
Application Insights Overview Guide

This document is a table of contents for Application Insights documentation. It outlines sections on overviews, quickstarts, tutorials, concepts, how-to guides, samples, reference material, resources, and support for Application Insights. The sections cover monitoring various platforms, configuring Application Insights, analyzing telemetry data, automating tasks, developing custom monitoring, managing data, exporting data, and troubleshooting issues.


Table of Contents

Application Insights Documentation


Overview
What is Application Insights?
Performance monitoring overview
Quickstarts
.NET
.NET Core
[Link]
Java
Mobile
Tutorials
Find run-time exceptions
Find performance issues
Alert on application health
Understand users
Create custom dashboards
Concepts
Monitor Azure
Azure web apps
Azure Cloud Services
Monitor [Link] apps
Web apps
Web apps already live
Windows services
Windows desktop
[Link] Core
Console
Monitor Java apps
Web apps
Web apps - runtime
Docker apps
Monitor [Link] apps
[Link]
Monitor web pages
JavaScript
Monitor other platforms
SharePoint sites
More platforms
How-to guides
Plan and design
Deep diagnostics for web apps and services
Monitor performance in web applications
Separate development, test, and production
Monitor apps with multiple components
How do I ... in Application Insights?
Configure
Azure
[Link]
J2EE
Alerts
Smart Detection
Create a resource
Analyze
Application Insights portal
Visual Studio
Usage
Analytics
Automate
Azure PowerShell configuration
Create resources
Set alerts
Get Azure diagnostics
Automate with Microsoft Flow
Automate with an Azure Logic App
Develop
API for custom events and metrics
Track custom operations in .NET SDK
Filtering and preprocessing telemetry
Sampling
Manage
Manage pricing and data volume
Application Performance Monitoring using Application Insights for SCOM
Export
Continuous export
Export data model
Export to Power BI
Secure
Data collection, retention, and storage
Resources, roles, and access control
IP addresses
Troubleshoot
No data for .NET
Snapshot debugger
Analytics
Java
Usage analytics
Samples
Code samples
Reference
Analytics query language
.NET
Java
JavaScript
Data access API
Data model
Request
Dependency
Exception
Trace
Event
Metric
Context
Telemetry correlation
Code samples
Resources
Azure Roadmap
Languages and platforms
Pricing
Pricing calculator
News
Blog
Service updates
SDK release notes
Release notes for Developer Analytics Tools
FAQ
Help
MSDN forum
Stack Overflow
User Voice
Support
Videos
What is Application Insights?
11/15/2017 • 5 min to read

Application Insights is an extensible Application Performance Management (APM) service for web
developers on multiple platforms. Use it to monitor your live web application. It will automatically detect
performance anomalies. It includes powerful analytics tools to help you diagnose issues and to
understand what users actually do with your app. It's designed to help you continuously improve
performance and usability. It works for apps on a wide variety of platforms including .NET, [Link] and
J2EE, hosted on-premises or in the cloud. It integrates with your DevOps process, and has connection
points to a variety of development tools. It can monitor and analyze telemetry from mobile apps by
integrating with Visual Studio App Center and HockeyApp.

Take a look at the intro animation.

How does Application Insights work?


You install a small instrumentation package in your application, and set up an Application Insights
resource in the Microsoft Azure portal. The instrumentation monitors your app and sends telemetry data
to the portal. (The application can run anywhere - it doesn't have to be hosted in Azure.)
You can instrument not only the web service application, but also any background components, and the
JavaScript in the web pages themselves.
In addition, you can pull in telemetry from the host environments such as performance counters, Azure
diagnostics, or Docker logs. You can also set up web tests that periodically send synthetic requests to
your web service.
All these telemetry streams are integrated in the Azure portal, where you can apply powerful analytic and
search tools to the raw data.
What's the overhead?
The impact on your app's performance is very small. Tracking calls are non-blocking, and are batched
and sent in a separate thread.
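The low overhead comes from that batching design: tracking calls enqueue and return, while a background thread batches and sends. A toy sketch of the idea (in Python for illustration; the class and names are hypothetical, not the SDK's):

```python
import queue
import threading

class BatchedTelemetryChannel:
    """Toy sketch of a non-blocking, batched telemetry channel.
    Illustrative only; real SDK channels are more sophisticated."""

    def __init__(self, batch_size=100, flush_interval=0.25):
        self._queue = queue.Queue()
        self._batch_size = batch_size
        self._flush_interval = flush_interval
        self._stop = threading.Event()
        self.sent_batches = []  # stands in for an HTTP transmitter
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def track(self, item):
        # Non-blocking: enqueue and return to the caller immediately.
        self._queue.put(item)

    def _run(self):
        batch = []
        while not self._stop.is_set() or not self._queue.empty():
            try:
                batch.append(self._queue.get(timeout=self._flush_interval))
            except queue.Empty:
                pass
            # Flush when the batch is full or nothing else is waiting.
            if batch and (len(batch) >= self._batch_size or self._queue.empty()):
                self.sent_batches.append(batch)
                batch = []

    def close(self):
        """Drain remaining items and stop the background thread."""
        self._stop.set()
        self._worker.join()
```

The app thread only pays the cost of an enqueue; network transmission happens entirely on the worker.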

What does Application Insights monitor?


Application Insights is aimed at the development team, to help you understand how your app is
performing and how it's being used. It monitors:
Request rates, response times, and failure rates - Find out which pages are most popular, at what
times of day, and where your users are. See which pages perform best. If response times and
failure rates rise when request counts rise, you may have a resourcing problem.
Dependency rates, response times, and failure rates - Find out whether external services are
slowing you down.
Exceptions - Analyze the aggregated statistics, or pick specific instances and drill into the stack trace
and related requests. Both server and browser exceptions are reported.
Page views and load performance - reported by your users' browsers.
AJAX calls from web pages - rates, response times, and failure rates.
User and session counts.
Performance counters from your Windows or Linux server machines, such as CPU, memory, and
network usage.
Host diagnostics from Docker or Azure.
Diagnostic trace logs from your app - so that you can correlate trace events with requests.
Custom events and metrics that you write yourself in the client or server code, to track business
events such as items sold or games won.

Where do I see my telemetry?


There are plenty of ways to explore your data. Check out these articles:

Smart detection and manual alerts


Automatic alerts adapt to your app's normal patterns of
telemetry and trigger when there's something outside
the usual pattern. You can also set alerts on particular
levels of custom or standard metrics.

Application map
The components of your app, with key metrics and
alerts.

Profiler
Inspect the execution profiles of sampled requests.

Usage analysis
Analyze user segmentation and retention.

Diagnostic search for instance data


Search and filter events such as requests, exceptions,
dependency calls, log traces, and page views.

Metrics Explorer for aggregated data


Explore, filter, and segment aggregated data such as
request, failure, and exception rates, response times,
and page load times.

Dashboards
Mash up data from multiple resources and share with
others. Great for multi-component applications, and for
continuous display in the team room.
Live Metrics Stream
When you deploy a new build, watch these near-real-
time performance indicators to make sure everything
works as expected.

Analytics
Answer tough questions about your app's performance
and usage by using this powerful query language.

Visual Studio
See performance data in the code. Go to code from
stack traces.

Snapshot debugger
Debug snapshots sampled from live operations, with
parameter values.

Power BI
Integrate usage metrics with other business intelligence.

REST API
Write code to run queries over your metrics and raw
data.
Continuous export
Bulk export of raw data to storage as soon as it arrives.

How do I use Application Insights?


Monitor
Install Application Insights in your app, set up availability web tests, and:
Set up a dashboard for your team room to keep an eye on load, responsiveness, and the performance
of your dependencies, page loads, and AJAX calls.
Discover the slowest and most frequently failing requests.
Watch Live Stream when you deploy a new release, to know immediately about any degradation.
Detect, Diagnose
When you receive an alert or discover a problem:
Assess how many users are affected.
Correlate failures with exceptions, dependency calls and traces.
Examine profiler, snapshots, stack dumps, and trace logs.
Build, Measure, Learn
Measure the effectiveness of each new feature that you deploy.
Plan to measure how customers use new UX or business features.
Write custom telemetry into your code.
Base the next development cycle on hard evidence from your telemetry.

Get started
Application Insights is one of the many services hosted within Microsoft Azure, and telemetry is sent
there for analysis and presentation. So before you do anything else, you'll need a subscription to
Microsoft Azure. It's free to sign up, and if you choose the basic pricing plan of Application Insights,
there's no charge until your application has grown to have substantial usage. If your organization already
has a subscription, they could add your Microsoft account to it.
There are several ways to get started. Begin with whichever works best for you. You can add the others
later.
At run time: instrument your web app on the server. Avoids any update to the code. You need
admin access to your server.
IIS on-premises or on a VM
Azure web app or VM
J2EE
At development time: add Application Insights to your code. Allows you to write custom
telemetry and to instrument back-end and desktop apps.
Visual Studio 2013 update 2 or later.
Java in Eclipse or other tools
[Link]
Other platforms
Instrument your web pages for page view, AJAX and other client-side telemetry.
Analyze mobile app usage by integrating with Visual Studio App Center.
Availability tests - ping your website regularly from our servers.

Next steps
Get started at runtime with:
IIS server
J2EE server
Get started at development time with:
[Link]
Java
[Link]

Support and feedback


Questions and Issues:
Troubleshooting
MSDN Forum
StackOverflow
Your suggestions:
UserVoice
Blog:
Application Insights blog

Videos
Overview of Application Insights for DevOps
11/1/2017 • 14 min to read

With Application Insights, you can quickly find out how your app is performing and being used when it's live. If
there's a problem, it lets you know about it, helps you assess the impact, and helps you determine the cause.
Here's an account from a team that develops web applications:
"A couple of days ago, we deployed a 'minor' hotfix. We didn't run a broad test pass, but unfortunately some
unexpected change got merged into the payload, causing incompatibility between the front and back ends.
Immediately, server exceptions surged, our alert fired, and we were made aware of the situation. A few clicks
away on the Application Insights portal, we got enough information from exception callstacks to narrow down
the problem. We rolled back immediately and limited the damage. Application Insights has made this part of
the devops cycle very easy and actionable."
In this article we follow a team in Fabrikam Bank that develops the online banking system (OBS) to see how they
use Application Insights to quickly respond to customers and make updates.
The team works on a DevOps cycle depicted in the following illustration:

Requirements feed into their development backlog (task list). They work in short sprints, which often deliver
working software - usually in the form of improvements and extensions to the existing application. The live app is
frequently updated with new features. While it's live, the team monitors it for performance and usage with the help
of Application Insights. This APM data feeds back into their development backlog.
The team uses Application Insights to monitor the live web application closely for:
Performance. They want to understand how response times vary with request count; how much CPU, network,
disk, and other resources are being used; which application code slowed down performance; and where the
bottlenecks are.
Failures. If there are exceptions or failed requests, or if a performance counter goes outside its comfortable
range, the team needs to know rapidly so that they can take action.
Usage. Whenever a new feature is released, the team wants to know to what extent it is used, and whether users
have any difficulties with it.
Let's focus on the feedback part of the cycle:

Detect poor availability


Marcela Markova is a senior developer on the OBS team, and takes the lead on monitoring online performance.
She sets up several availability tests:
A single-URL test for the main landing page for the app, [Link]. She sets
criteria of HTTP code 200 and the text 'Welcome!'. If this test fails, there's something seriously wrong with the
network or the servers, or maybe a deployment issue. (Or someone has changed the Welcome! message on the
page without letting her know.)
A deeper multi-step test, which logs in and gets a current account listing, checking a few key details on each
page. This test verifies that the link to the accounts database is working. She uses a fictitious customer ID; a few
of these are maintained for test purposes.
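The pass/fail criteria of the single-URL test (expected HTTP status plus a marker text on the page) amount to a simple predicate. A minimal sketch, with the status code and text as stand-ins for whatever a real test would assert:

```python
def availability_check(status_code, body, expected_status=200, expected_text="Welcome!"):
    """Return (passed, reason) for a single-URL availability test:
    the response must have the expected status AND contain the marker text."""
    if status_code != expected_status:
        return False, f"unexpected status {status_code}"
    if expected_text not in body:
        return False, f"missing expected text {expected_text!r}"
    return True, "ok"
```

A failing reason string is what makes the alert actionable: a wrong status suggests a server or deployment problem, while missing text suggests the page changed.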
With these tests set up, Marcela is confident that the team will quickly know about any outage.
Failures show up as red dots on the web test chart:

But more importantly, an alert about any failure is emailed to the development team. That way, they know about
it before nearly all of their customers do.

Monitor Performance
On the overview page in Application Insights, there's a chart that shows a variety of key metrics.
Browser page load time is derived from telemetry sent directly from web pages. Server response time, server
request count, and failed request count are all measured in the web server and sent to Application Insights from
there.
Marcela is slightly concerned with the server response graph. This graph shows the average time between when
the server receives an HTTP request from a user's browser, and when it returns the response. It isn't unusual to see
a variation in this chart, as load on the system varies. But in this case, there seems to be a correlation between
small rises in the count of requests, and big rises in the response time. That could indicate that the system is
operating just at its limits.
She opens the Servers charts:

There seems to be no sign of resource limitation there, so maybe the bumps in the server response charts are just
a coincidence.

Set alerts to meet goals


Nevertheless, she'd like to keep an eye on the response times. If they go too high, she wants to know about it
immediately.
So she sets an alert, for response times greater than a typical threshold. This gives her confidence that she'll know
about it if response times are slow.
Alerts can be set on a wide variety of other metrics. For example, you can receive emails if the exception count
becomes high, or the available memory goes low, or if there is a peak in client requests.

Stay informed with Smart Detection Alerts


Next day, an alert email does arrive from Application Insights. But when she opens it, she finds it isn't the response
time alert that she set. Instead, it tells her there's been a sudden rise in failed requests - that is, requests that have
returned failure codes of 500 or more.
Failed requests are where users have seen an error - typically following an exception thrown in the code. Maybe
they see a message saying "Sorry, we couldn't update your details right now." Or, at the embarrassing worst, a
stack dump appears on the user's screen, courtesy of the web server.
This alert is a surprise, because the last time she looked at it, the failed request count was encouragingly low. A
small number of failures is to be expected in a busy server.
It was also a bit of a surprise because she didn't have to configure this alert. Application Insights includes
Smart Detection. It automatically adjusts to your app's usual failure pattern, and "gets used to" failures on a
particular page, under high load, or linked to other metrics. It raises the alarm only if there's a rise above what it
has come to expect.
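The "rise above what it has come to expect" idea can be approximated with a rolling baseline: compare the current failure count against the recent mean and spread. A deliberately crude sketch (the real Smart Detection service uses machine learning, not this formula):

```python
from statistics import mean, stdev

def is_anomalous(history, current, k=3.0):
    """Alert only if `current` exceeds the historical mean by more than
    k standard deviations; a toy stand-in for adaptive detection."""
    if len(history) < 2:
        return False  # not enough history to form a baseline yet
    mu, sigma = mean(history), stdev(history)
    # Floor sigma so a perfectly flat baseline doesn't alert on tiny blips.
    return current > mu + k * max(sigma, 1.0)
```

The point of the baseline is exactly what the email demonstrates: a busy server with a steady trickle of failures never alerts, but a sudden surge does.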
This is a very useful email. It doesn't just raise an alarm. It does a lot of the triage and diagnostic work, too.
It shows how many customers are affected, and which web pages or operations. Marcela can decide whether she
needs to get the whole team working on this as a fire drill, or whether it can be ignored until next week.
The email also shows that a particular exception occurred, and - even more interesting - that the failure is
associated with failed calls to a particular database. This explains why the fault suddenly appeared even though
Marcela's team has not deployed any updates recently.
Marcela pings the leader of the database team based on this email. She learns that they released a hotfix in the
past half hour and that, oops, there might have been a minor schema change...
So the problem is on the way to being fixed, even before investigating logs, and within 15 minutes of it arising.
However, Marcela clicks the link to open Application Insights. It opens straight onto a failed request, and she can
see the failed database call in the associated list of dependency calls.
Detect exceptions
With a little bit of setup, exceptions are reported to Application Insights automatically. They can also be captured
explicitly by inserting calls to TrackException() into the code:

var telemetry = new TelemetryClient();

...
try
{
    ...
}
catch (Exception ex)
{
    // Set up some properties:
    var properties = new Dictionary<string, string>
        {{"Game", currentGame.Name}};

    var measurements = new Dictionary<string, double>
        {{"Users", currentGame.Users.Count}};

    // Send the exception telemetry:
    telemetry.TrackException(ex, properties, measurements);
}

The Fabrikam Bank team has evolved the practice of always sending telemetry on an exception, unless there's an
obvious recovery.
In fact, their strategy is even broader than that: They send telemetry in every case where the customer is frustrated
in what they wanted to do, whether it corresponds to an exception in the code or not. For example, if the external
inter-bank transfer system returns a "can't complete this transaction" message for some operational reason (no
fault of the customer) then they track that event.
var successCode = AttemptTransfer(transferAmount, ...);
if (successCode < 0)
{
    var properties = new Dictionary<string, string>
        {{"Code", successCode.ToString()}};
    var measurements = new Dictionary<string, double>
        {{"Value", transferAmount}};
    telemetry.TrackEvent("transfer failed", properties, measurements);
}

TrackException is used to report exceptions because it sends a copy of the stack. TrackEvent is used to report other
events. You can attach any properties that might be useful in diagnosis.
Exceptions and events show up in the Diagnostic Search blade. You can drill into them to see the additional
properties and stack trace.

Monitor proactively
Marcela doesn't just sit around waiting for alerts. Soon after every redeployment, she takes a look at response
times - both the overall figure and the table of slowest requests, as well as exception counts.
She can assess the performance effect of every deployment, typically comparing each week with the last. If there's
a sudden worsening, she raises that with the relevant developers.

Triage issues
Triage - assessing the severity and extent of a problem - is the first step after detection. Should we call out the team
at midnight? Or can it be left until the next convenient gap in the backlog? There are some key questions in triage.
How often is it happening? The charts on the Overview blade give some perspective to a problem. For example, the
Fabrikam application generated four web test alerts one night. Looking at the chart in the morning, the team could
see that there were indeed some red dots, though still most of the tests were green. Drilling into the availability
chart, it was clear that all of these intermittent problems were from one test location. This was obviously a network
issue affecting only one route, and would most likely clear itself.
By contrast, a dramatic and stable rise in the graph of exception counts or response times is obviously something
to panic about.
A useful triage tactic is Try It Yourself. If you run into the same problem, you know it's real.
What fraction of users are affected? To obtain a rough answer, divide the failed request count by the session count.

When there are slow responses, compare the table of slowest-responding requests with the usage frequency of
each page.
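That back-of-envelope estimate can be written down directly; a sketch, with the cap acknowledging that a single session can fail more than once:

```python
def affected_fraction(failed_requests, session_count):
    """Rough estimate of the fraction of users affected, assuming at most
    one failure matters per session; refine with real user counts later."""
    if session_count == 0:
        return 0.0
    return min(failed_requests / session_count, 1.0)
```

For example, 50 failed requests across 1,000 sessions suggests roughly 5% of users saw a problem - enough precision to decide between a fire drill and a backlog item.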
How important is the blocked scenario? If this is a functional problem blocking a particular user story, does it
matter much? If customers can't pay their bills, this is serious; if they can't change their screen color preferences,
maybe it can wait. The detail of the event or exception, or the identity of the slow page, tells you where customers
are having trouble.

Diagnose issues
Diagnosis isn't quite the same as debugging. Before you start tracing through the code, you should have a rough
idea of why, where and when the issue is occurring.
When does it happen? The historical view provided by the event and metric charts makes it easy to correlate
effects with possible causes. If there are intermittent peaks in response time or exception rates, look at the request
count: if it peaks at the same time, then it looks like a resource problem. Do you need to assign more CPU or
memory? Or is it a dependency that can't manage the load?
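The "does it peak at the same time?" check can be made concrete by correlating the two metric series over aligned time buckets. A sketch using Pearson correlation (the 0.8 threshold is an arbitrary illustration, not guidance):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation of two equal-length metric series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def looks_like_resource_problem(request_counts, response_times, threshold=0.8):
    """If response time closely tracks load, suspect a resource bottleneck."""
    return pearson(request_counts, response_times) >= threshold
```

A strong positive correlation points toward capacity; a weak one suggests looking elsewhere, such as at a dependency.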
Is it us? If you have a sudden drop in performance of a particular type of request - for example when the customer
wants an account statement - then there's a possibility it might be an external subsystem rather than your web
application. In Metrics Explorer, select the Dependency Failure rate and Dependency Duration rates and compare
their histories over the past few hours or days with the problem you detected. If there are correlating changes, then
an external subsystem might be to blame.

Some slow dependency issues are geolocation problems. Fabrikam Bank uses Azure virtual machines, and
discovered that they had inadvertently located their web server and account server in different countries. A
dramatic improvement was brought about by migrating one of them.
What did we do? If the issue doesn't appear to be in a dependency, and if it wasn't always there, it's probably
caused by a recent change. The historical perspective provided by the metric and event charts makes it easy to
correlate any sudden changes with deployments. That narrows down the search for the problem. To identify which
lines in the application code slowed down performance, enable Application Insights Profiler. See
Profiling live Azure web apps with Application Insights. After the Profiler is enabled, you'll see a trace similar to
the following. In this example, it's easy to see that the method GetStorageTableData caused the problem.
What's going on? Some problems occur only rarely and can be difficult to track down by testing offline. All we
can do is try to capture the bug when it occurs live. You can inspect the stack dumps in exception reports. In
addition, you can write tracing calls, either with your favorite logging framework or with TrackTrace() or
TrackEvent().
Fabrikam had an intermittent problem with inter-account transfers, but only with certain account types. To
understand better what was happening, they inserted TrackTrace() calls at key points in the code, attaching the
account type as a property to each call. That made it easy to filter out just those traces in Diagnostic Search. They
also attached parameter values as properties and measures to the trace calls.
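The tactic above (attach the account type as a property to each trace call, then filter on it) can be sketched like this, with the list and property names as illustrative stand-ins for the SDK and Diagnostic Search:

```python
def track_trace(trace_store, message, properties):
    """Stand-in for TrackTrace(): record a message plus diagnostic properties."""
    trace_store.append({"message": message, "properties": dict(properties)})

def filter_traces(trace_store, key, value):
    """Mimic Diagnostic Search filtering: traces whose property matches."""
    return [t for t in trace_store if t["properties"].get(key) == value]
```

Because every trace carries the account type, isolating the problematic account type is a single filter rather than a grep through mixed logs.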

Respond to discovered issues


Once you've diagnosed the issue, you can make a plan to fix it. Maybe you need to roll back a recent change, or
maybe you can just go ahead and fix it. Once the fix is done, Application Insights tells you whether you succeeded.
Fabrikam Bank's development team takes a more structured approach to performance measurement than it did
before adopting Application Insights.
They set performance targets in terms of specific measures in the Application Insights overview page.
They design performance measures into the application from the start, such as the metrics that measure user
progress through 'funnels.'

Monitor user activity


When response time is consistently good and there are few exceptions, the dev team can move on to usability.
They can think about how to improve the users' experience, and how to encourage more users to achieve the
desired goals.
Application Insights can also be used to learn what users do with an app. Once it's running smoothly, the team
would like to know which features are the most popular, what users like or have difficulty with, and how often they
come back. That will help them prioritize their upcoming work. And they can plan to measure the success of each
feature as part of the development cycle.
For example, a typical user journey through the web site has a clear "funnel." Many customers look at the rates of
different types of loan. A smaller number go on to fill in the quotation form. Of those who get a quotation, a few go
ahead and take out the loan.

By considering where the greatest numbers of customers drop out, the business can work out how to get more
users through to the bottom of the funnel. In some cases, there might be a user experience (UX) failure - for
example, the 'next' button is hard to find, or the instructions aren't obvious. More likely, there are more significant
business reasons for drop-outs: maybe the loan rates are too high.
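Finding where the greatest numbers of customers drop out is just stage-to-stage arithmetic on the funnel counts. A small sketch, with made-up stage names and numbers:

```python
def funnel_conversion(stage_counts):
    """Given ordered (stage, users) pairs, return (transition, rate) pairs
    so the biggest drop-off between stages stands out."""
    rates = []
    for (prev_name, prev_n), (name, n) in zip(stage_counts, stage_counts[1:]):
        rates.append((f"{prev_name} -> {name}", n / prev_n if prev_n else 0.0))
    return rates
```

If only 20% of rate viewers start a quotation but 25% of quoters take a loan, the quotation form - not the final commitment - is where attention should go.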
Whatever the reasons, the data helps the team work out what users are doing. More tracking calls can be inserted
to work out more detail. TrackEvent() can be used to count any user actions, from the fine detail of individual
button clicks, to significant achievements such as paying off a loan.
The team is getting used to having information about user activity. Nowadays, whenever they design a new
feature, they work out how they will get feedback about its usage. They design tracking calls into the feature from
the start. They use the feedback to improve the feature in each development cycle.
Read more about tracking usage.

Apply the DevOps cycle


So that's how one team uses Application Insights not just to fix individual issues, but to improve their development
lifecycle. I hope it has given you some ideas about how Application Insights can help you with application
performance management in your own applications.

Video

Next steps
You can get started in several ways, depending on the characteristics of your application. Pick what suits you best:
[Link] web application
Java web application
[Link] web application
Already deployed apps, hosted on IIS, J2EE, or Azure.
Web pages - Single Page App or ordinary web page - use this on its own or in addition to any of the server
options.
Availability tests to test your app from the public internet.
Start monitoring your [Link] Web Application
11/1/2017 • 2 min to read

With Azure Application Insights, you can easily monitor your web application for availability, performance, and
usage. You can also quickly identify and diagnose errors in your application without waiting for a user to report
them. With the information that you collect from Application Insights about the performance and effectiveness of
your app, you can make informed choices to maintain and improve your application.
This quickstart shows how to add Application Insights to an existing [Link] web application and start analyzing
live statistics, which is just one of the various ways you can analyze your application. If you don't have a
[Link] web application, you can create one by following the Create a [Link] Web App quickstart.

Prerequisites
To complete this quickstart:
Install Visual Studio 2017 with the following workloads:
[Link] and web development
Azure development
If you don't have an Azure subscription, create a free account before you begin.

Enable Application Insights


1. Open your project in Visual Studio 2017.
2. Select Configure Application Insights from the Project menu. Visual Studio adds the Application Insights SDK
to your application.
3. Click Start Free, select your preferred billing plan, and click Register.

4. Run your application by either selecting Start Debugging from the Debug menu or by pressing the F5 key.
Confirm app configuration
Application Insights gathers telemetry data for your application regardless of where it's running. Use the following
steps to start viewing this data.
1. Open Application Insights by clicking Project -> Application Insights -> Search Debug Session
Telemetry. You see the telemetry from your current session.

2. Click on the first request in the list (GET Home/Index in this example) to see the request details. Notice that
the status code and response time are both included along with other valuable information about the
request.

Start monitoring in the Azure portal


You can now open Application Insights in the Azure portal to view various details about your running application.
1. Right-click on Connected Services Application Insights folder in Solution Explorer and click Open
Application Insights Portal. You see some information about your application and a variety of options.
2. Click on App map to get a visual layout of the dependency relationships between your application
components. Each component shows KPIs such as load, performance, failures, and alerts.

3. Click on the App Analytics icon on one of the application components. This opens Application Insights
Analytics, which provides a rich query language for analyzing all data collected by Application Insights. In
this case, a query is generated for you that renders the request count as a chart. You can write your own
queries to analyze other data.
4. Return to the Overview page and click on Live Stream. This shows live statistics about your application as
it's running. This includes such information as the number of incoming requests, the duration of those
requests, and any failures that occur. You can also inspect critical performance metrics such as processor and
memory.

If you are ready to host your application in Azure, you can publish it now. Follow the steps described in Create an
[Link] Web App Quickstart.

Next steps
In this quick start, you’ve enabled your application for monitoring by Azure Application Insights. Continue to the
tutorials to learn how to use it to monitor statistics and detect issues in your application.
Azure Application Insights tutorials
Start Monitoring Your [Link] Core Web Application
12/7/2017 • 3 min to read

With Azure Application Insights, you can easily monitor your web application for availability, performance, and
usage. You can also quickly identify and diagnose errors in your application without waiting for a user to report
them.
This quickstart guides you through adding the Application Insights SDK to an existing [Link] Core web
application.

Prerequisites
To complete this quickstart:
Install Visual Studio 2017 with the following workloads:
[Link] and web development
Azure development
Install .NET Core 2.0 SDK
You will need an Azure subscription and an existing .NET Core web application.
If you don't have an ASP.NET Core web application, you can create one by following the Create an ASP.NET Core
Web App guide.
If you don't have an Azure subscription, create a free account before you begin.

Log in to the Azure portal


Log in to the Azure portal.

Enable Application Insights


Application Insights can gather telemetry data from any internet-connected application, regardless of whether it's
running on-premises or in the cloud. Use the following steps to start viewing this data.
1. Select New > Monitoring + Management > Application Insights.
A configuration box appears. Use the table below to fill out the input fields.

SETTINGS            VALUE                      DESCRIPTION

Name                Globally unique value      Name that identifies the app you are monitoring

Application Type    ASP.NET web application    Type of app you are monitoring

Resource Group      myResourceGroup            Name for the new resource group to host App Insights data

Location            East US                    Choose a location near you, or near where your app is hosted

2. Click Create.

Configure App Insights SDK


1. Open your ASP.NET Core Web App project in Visual Studio > Right-click the AppName in the Solution
Explorer > Select Add > Application Insights Telemetry.
2. Click the Start Free button > Select the Existing resource you created in the Azure portal > Click Register.
3. Select Debug > Start without Debugging (Ctrl+F5) to launch your app.

NOTE
It takes 3-5 minutes before data begins appearing in the portal. If this app is a low-traffic test app, keep in mind that most
metrics are only captured when there are active requests or operations.

Start monitoring in the Azure portal


1. You can now reopen the Application Insights Overview page in the Azure portal by selecting Project >
Application Insights > Open Application Insights Portal, to view details about your currently running
application.

2. Click App map for a visual layout of the dependency relationships between your application components.
Each component shows KPIs such as load, performance, failures, and alerts.
3. Click the App Analytics icon. This opens Application Insights Analytics, which provides a rich
query language for analyzing all data collected by Application Insights. In this case, a query is generated for
you that renders the request count as a chart. You can write your own queries to analyze other data.

4. Return to the Overview page and examine the Health Overview timeline. This dashboard provides
statistics about your application health, including the number of incoming requests, the duration of those
requests, and any failures that occur.
To enable the Page View Load Time chart to populate with client-side telemetry data, add this script to
each page that you want to track:

<!--
To collect end-user usage analytics about your application,
insert the following script into each page you want to track.
Place this code immediately before the closing </head> tag,
and before any other scripts. Your first data will appear
automatically in just a few seconds.
-->
<script type="text/javascript">
var appInsights=window.appInsights||function(config){
function i(config){t[config]=function(){var i=arguments;t.queue.push(function(){t[config].apply(t,i)})}}var t=
{config:config},u=document,e=window,o="script",s="AuthenticatedUserContext",h="start",c="stop",l="Track",
a=l+"Event",v=l+"Page",y=u.createElement(o),r,f;
y.src=config.url||"https://az416426.vo.msecnd.net/scripts/a/ai.0.js";
u.getElementsByTagName(o)[0].parentNode.appendChild(y);try{t.cookie=u.cookie}catch(p){}
for(t.queue=[],t.version="1.0",r=["Event","Exception","Metric","PageView","Trace","Dependency"];r.length;)i("track"+r.pop());
return i("set"+s),i("clear"+s),i(h+a),i(c+a),i(h+v),i(c+v),i("flush"),config.disableExceptionTracking||
(r="onerror",i("_"+r),f=e[r],e[r]=function(config,i,u,e,o){var s=f&&f(config,i,u,e,o);return
s!==!0&&t["_"+r](config,i,u,e,o),s}),t
}({
instrumentationKey:"<insert instrumentation key>"
});

window.appInsights=appInsights;
appInsights.trackPageView();
</script>

5. Click Browser under the Investigate header. Here you find metrics related to the performance of
your app's pages. You can click Add new chart to create additional custom views or select Edit to modify
the existing chart types, height, color palette, groupings, and metrics.
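The minified JavaScript snippet in step 4 is hard to read. Conceptually, it installs a lightweight stub whose track* methods queue their arguments until the full SDK script finishes downloading; the real SDK then drains the queue. A simplified sketch of that queuing pattern (names and shapes are illustrative, not the actual SDK internals):

```javascript
// Sketch of the loader's queuing pattern (illustrative, not the real SDK).
function createStub(config) {
  var stub = { config: config, queue: [] };

  // Each tracking method just records its call until the full SDK arrives.
  ["trackEvent", "trackPageView", "trackException", "trackMetric"].forEach(function (name) {
    stub[name] = function () {
      var args = arguments;
      stub.queue.push(function (sdk) { sdk[name].apply(sdk, args); });
    };
  });
  return stub;
}

// Page code can call the stub immediately...
var appInsights = createStub({ instrumentationKey: "<insert instrumentation key>" });
appInsights.trackPageView();
appInsights.trackEvent("Video clicked");

// ...and when the real SDK loads, it replays everything that was queued.
var replayed = [];
var realSdk = {
  trackPageView: function () { replayed.push("pageView"); },
  trackEvent: function (name) { replayed.push("event:" + name); }
};
appInsights.queue.forEach(function (call) { call(realSdk); });
```

This is why calls such as appInsights.trackPageView() are safe to make immediately, even before the SDK script has finished loading.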
Clean up resources
If you plan to continue to work with subsequent quickstarts or with the tutorials, do not clean up the resources
created in this quickstart. If you do not plan to continue, use the following steps to delete all resources created by
this quickstart in the Azure portal.
1. From the left-hand menu in the Azure portal, click Resource groups and then click myResourceGroup.
2. On your resource group page, click Delete, type myResourceGroup in the text box, and then click Delete.

Next steps
Find and diagnose run-time exceptions
Start Monitoring Your Node.js Web Application
12/12/2017

With Azure Application Insights, you can easily monitor your web application for availability, performance, and
usage. You can also quickly identify and diagnose errors in your application without waiting for a user to report
them. From the version 0.20 SDK release onward, you can monitor common third-party packages, including
MongoDB, MySQL, and Redis.
This quickstart guides you through adding the version 0.22 Application Insights SDK for Node.js to an existing
Node.js web application.

Prerequisites
To complete this quickstart:
You need an Azure subscription and an existing Node.js web application.
If you don't have a Node.js web application, you can create one by following the Create a Node.js web app
quickstart.
If you don't have an Azure subscription, create a free account before you begin.

Log in to the Azure portal


Log in to the Azure portal.

Enable Application Insights


Application Insights can gather telemetry data from any internet-connected application, regardless of whether it's
running on-premises or in the cloud. Use the following steps to start viewing this data.
1. Select New > Monitoring + Management > Application Insights.
A configuration box appears. Use the table below to fill out the input fields.

SETTINGS            VALUE                      DESCRIPTION

Name                Globally unique value      Name that identifies the app you are monitoring

Application Type    Node.js Application        Type of app you are monitoring

Resource Group      myResourceGroup            Name for the new resource group to host App Insights data

Location            East US                    Choose a location near you, or near where your app is hosted

2. Click Create.

Configure App Insights SDK


1. Select Overview > Essentials > Copy your application's Instrumentation Key.

2. Add the Application Insights SDK for Node.js to your application. From your app's root folder run:

npm install applicationinsights --save

3. Edit your app's first .js file and add the two lines below to the topmost part of your script. If you are using the
Node.js quickstart app, modify the index.js file. Replace <instrumentation_key> with your
application's instrumentation key.

const appInsights = require('applicationinsights');
appInsights.setup('<instrumentation_key>').start();

4. Restart your app.
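Hard-coding the instrumentation key is fine for a quickstart, but in a real deployment you may prefer to read it from the environment (APPINSIGHTS_INSTRUMENTATIONKEY is the variable name the Node.js SDK conventionally checks; the helper below is a hedged sketch, not part of the SDK):

```javascript
// Resolve the instrumentation key from an environment map, failing loudly when
// it's missing, instead of embedding the key in source control.
function resolveInstrumentationKey(env) {
  var key = env.APPINSIGHTS_INSTRUMENTATIONKEY;
  if (!key) {
    throw new Error("Set APPINSIGHTS_INSTRUMENTATIONKEY before starting the app");
  }
  return key;
}

// Example: pretend the key was exported in the shell (normally pass process.env).
var key = resolveInstrumentationKey({
  APPINSIGHTS_INSTRUMENTATIONKEY: "00000000-0000-0000-0000-000000000000"
});
// You would then hand it to the SDK:
//   appInsights.setup(key).start();
```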

NOTE
It takes 3-5 minutes before data begins appearing in the portal. If this app is a low-traffic test app, keep in mind that most
metrics are only captured when there are active requests or operations occurring.

Start monitoring in the Azure portal


1. You can now reopen the Application Insights Overview page in the Azure portal, where you retrieved your
instrumentation key, to view details about your currently running application.

2. Click App map for a visual layout of the dependency relationships between your application components.
Each component shows KPIs such as load, performance, failures, and alerts.

3. Click the App Analytics icon. This opens Application Insights Analytics, which provides a rich
query language for analyzing all data collected by Application Insights. In this case, a query is generated for
you that renders the request count as a chart. You can write your own queries to analyze other data.
4. Return to the Overview page and examine the Health Overview timeline. This dashboard provides
statistics about your application health, including the number of incoming requests, the duration of those
requests, and any failures that occur.

To enable the Page View Load Time chart to populate with client-side telemetry data, add this script to
each page that you want to track:
<!--
To collect end-user usage analytics about your application,
insert the following script into each page you want to track.
Place this code immediately before the closing </head> tag,
and before any other scripts. Your first data will appear
automatically in just a few seconds.
-->
<script type="text/javascript">
var appInsights=window.appInsights||function(config){
function i(config){t[config]=function(){var i=arguments;t.queue.push(function(){t[config].apply(t,i)})}}var t=
{config:config},u=document,e=window,o="script",s="AuthenticatedUserContext",h="start",c="stop",l="Track",
a=l+"Event",v=l+"Page",y=u.createElement(o),r,f;
y.src=config.url||"https://az416426.vo.msecnd.net/scripts/a/ai.0.js";
u.getElementsByTagName(o)[0].parentNode.appendChild(y);try{t.cookie=u.cookie}catch(p){}
for(t.queue=[],t.version="1.0",r=["Event","Exception","Metric","PageView","Trace","Dependency"];r.length;)i("track"+r.pop());
return i("set"+s),i("clear"+s),i(h+a),i(c+a),i(h+v),i(c+v),i("flush"),config.disableExceptionTracking||
(r="onerror",i("_"+r),f=e[r],e[r]=function(config,i,u,e,o){var s=f&&f(config,i,u,e,o);return
s!==!0&&t["_"+r](config,i,u,e,o),s}),t
}({
instrumentationKey:"<insert instrumentation key>"
});

window.appInsights=appInsights;
appInsights.trackPageView();
</script>

5. Click Browser under the Investigate header. Here you find metrics related to the performance of
your app's pages. You can click Add new chart to create additional custom views or select Edit to modify
the existing chart types, height, color palette, groupings, and metrics.
To learn more about monitoring Node.js, check out the additional App Insights Node.js documentation.

Clean up resources
If you plan to continue to work with subsequent quickstarts or with the tutorials, do not clean up the resources
created in this quickstart. If you do not plan to continue, use the following steps to delete all resources created by
this quickstart in the Azure portal.
1. From the left-hand menu in the Azure portal, click Resource groups and then click myResourceGroup.
2. On your resource group page, click Delete, type myResourceGroup in the text box, and then click Delete.

Next steps
Find and diagnose performance problems
Start Monitoring Your Java Web Application
12/7/2017

With Azure Application Insights, you can easily monitor your web application for availability, performance, and
usage. You can also quickly identify and diagnose errors in your application without waiting for a user to report
them. With the Application Insights Java SDK, you can monitor common third-party packages including MongoDB,
MySQL, and Redis.
This quickstart guides you through adding the Application Insights SDK to an existing Java Dynamic Web Project.

Prerequisites
To complete this quickstart:
Install Oracle JRE 1.6 or later, or Zulu JRE 1.6 or later
Install Free Eclipse IDE for Java EE Developers. This quickstart uses Eclipse Oxygen (4.7)
You will need an Azure subscription and an existing Java Dynamic Web Project
If you don't have a Java Dynamic Web Project, you can create one with the Create a Java web app quickstart.
If you don't have an Azure subscription, create a free account before you begin.

Log in to the Azure portal


Log in to the Azure portal.

Enable Application Insights


Application Insights can gather telemetry data from any internet-connected application, regardless of whether it's
running on-premises or in the cloud. Use the following steps to start viewing this data.
1. Select New > Monitoring + Management > Application Insights.
A configuration box appears. Use the table below to fill out the input fields.

SETTINGS            VALUE                      DESCRIPTION

Name                Globally unique value      Name that identifies the app you are monitoring

Application Type    Java web application       Type of app you are monitoring

Resource Group      myResourceGroup            Name for the new resource group to host App Insights data

Location            East US                    Choose a location near you, or near where your app is hosted

2. Click Create.

Install App Insights Plugin


1. Launch Eclipse > Click Help > Select Install New Software.

2. Copy [Link] into the "Work With" field > Check Azure Toolkit for Java > Select
Application Insights Plugin for Java > Uncheck "Contact all update sites during install to find required
software."
3. Once the installation is complete, you will be prompted to Restart Eclipse.

Configure App Insights Plugin


1. Launch Eclipse > Open your Project > Right-click the project name in the Project Explorer > Select Azure
> Click Sign In.
2. Select Authentication Method Interactive > Click Sign In > When prompted enter your Azure credentials
> Select Your Azure Subscription.
3. Right-click your project name in Project Explorer > Select Azure > Click Configure Application Insights.
4. Check Enable telemetry with Application Insights > Select the App Insights resource and associated
Instrumentation Key you want to link to your Java app.
NOTE
The Application Insights SDK for Java is capable of capturing and visualizing live metrics, but when you first enable telemetry
collection it can take a few minutes before data begins appearing in the portal. If this app is a low-traffic test app, keep in
mind that most metrics are only captured when there are active requests or operations.

Start monitoring in the Azure portal


1. You can now reopen the Application Insights Overview page in the Azure portal, where you retrieved your
instrumentation key, to view details about your currently running application.

2. Click App map for a visual layout of the dependency relationships between your application components.
Each component shows KPIs such as load, performance, failures, and alerts.
3. Click the App Analytics icon. This opens Application Insights Analytics, which provides a rich
query language for analyzing all data collected by Application Insights. In this case, a query is generated for
you that renders the request count as a chart. You can write your own queries to analyze other data.

4. Return to the Overview page and examine the Health Overview timeline. This dashboard provides
statistics about your application health, including the number of incoming requests, the duration of those
requests, and any failures that occur.
To enable the Page View Load Time chart to populate with client-side telemetry data, add this script to
each page that you want to track:

<!--
To collect end-user usage analytics about your application,
insert the following script into each page you want to track.
Place this code immediately before the closing </head> tag,
and before any other scripts. Your first data will appear
automatically in just a few seconds.
-->
<script type="text/javascript">
var appInsights=window.appInsights||function(config){
function i(config){t[config]=function(){var i=arguments;t.queue.push(function(){t[config].apply(t,i)})}}var t=
{config:config},u=document,e=window,o="script",s="AuthenticatedUserContext",h="start",c="stop",l="Track",
a=l+"Event",v=l+"Page",y=u.createElement(o),r,f;
y.src=config.url||"https://az416426.vo.msecnd.net/scripts/a/ai.0.js";
u.getElementsByTagName(o)[0].parentNode.appendChild(y);try{t.cookie=u.cookie}catch(p){}
for(t.queue=[],t.version="1.0",r=["Event","Exception","Metric","PageView","Trace","Dependency"];r.length;)i("track"+r.pop());
return i("set"+s),i("clear"+s),i(h+a),i(c+a),i(h+v),i(c+v),i("flush"),config.disableExceptionTracking||
(r="onerror",i("_"+r),f=e[r],e[r]=function(config,i,u,e,o){var s=f&&f(config,i,u,e,o);return
s!==!0&&t["_"+r](config,i,u,e,o),s}),t
}({
instrumentationKey:"<instrumentation key>"
});

window.appInsights=appInsights;
appInsights.trackPageView();
</script>

5. Click Live Stream. Here you find live metrics related to the performance of your Java web app. Live
Metrics Stream includes data about the number of incoming requests, the duration of those requests,
and any failures that occur. You can also monitor critical performance metrics, such as processor and
memory, in real time.
To learn more about monitoring Java, check out the additional App Insights Java documentation.

Clean up resources
If you plan to continue to work with subsequent quickstarts or with the tutorials, do not clean up the resources
created in this quickstart. If you do not plan to continue, use the following steps to delete all resources created by
this quickstart in the Azure portal.
1. From the left-hand menu in the Azure portal, click Resource groups and then click myResourceGroup.
2. On your resource group page, click Delete, type myResourceGroup in the text box, and then click Delete.

Next steps
Find and diagnose performance problems
Start analyzing your mobile app with App Center
and Application Insights
11/15/2017

This quickstart guides you through connecting your app's App Center instance to Application Insights. With
Application Insights, you can query, segment, filter, and analyze your telemetry with more powerful tools than are
available from the Analytics service of App Center.

Prerequisites
To complete this quickstart, you need:
An Azure subscription.
An iOS, Android, Xamarin, Universal Windows, or React Native app.
If you don't have an Azure subscription, create a free account before you begin.

Onboard to App Center


Before you can use Application Insights with your mobile app, you need to onboard your app to App Center.
Application Insights does not receive telemetry from your mobile app directly. Instead, your app sends custom
event telemetry to App Center. Then, App Center continuously exports copies of these custom events into
Application Insights as the events are received.
To onboard your app, follow the App Center quickstart for each platform your app supports. Create separate App
Center instances for each platform:
iOS.
Android.
Xamarin.
Universal Windows.
React Native.

Track events in your app


After your app is onboarded to App Center, it needs to be modified to send custom event telemetry using the App
Center SDK. Custom events are the only type of App Center telemetry that is exported to Application Insights.
To send custom events from iOS apps, use the trackEvent or trackEvent:withProperties methods in the App
Center SDK. Learn more about tracking events from iOS apps.

MSAnalytics.trackEvent("Video clicked")

To send custom events from Android apps, use the trackEvent method in the App Center SDK. Learn more about
tracking events from Android apps.

Analytics.trackEvent("Video clicked");

To send custom events from other app platforms, use the trackEvent methods in their App Center SDKs.
To make sure your custom events are being received, go to the Events tab under the Analytics section in App
Center. It can take a couple minutes for events to show up from when they're sent from your app.
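Conceptually, each custom event App Center receives (and later exports to Application Insights) is just a name plus an optional property bag. A hedged JavaScript sketch of that shape (field names are illustrative, not App Center's actual wire format):

```javascript
// Build the conceptual payload for a custom event (illustrative shape only).
function buildCustomEvent(name, properties) {
  if (!name) throw new Error("A custom event needs a name");
  return {
    name: name,                         // e.g. "Video clicked"
    properties: properties || {},       // optional string key/value bag
    timestamp: new Date().toISOString() // when the event was recorded
  };
}

var event = buildCustomEvent("Video clicked", { videoId: "intro-01" });
```

Keeping event names stable and putting variable details in properties makes the events much easier to segment and filter later in Application Insights.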

Create an Application Insights resource


Once your app is sending custom events and these events are being received by App Center, you need to create an
App Center-type Application Insights resource in the Azure portal:
1. Log in to the Azure portal.
2. Select New > Monitoring + Management > Application Insights.

A configuration box will appear. Use the table below to fill out the input fields.

SETTINGS            VALUE                                          DESCRIPTION

Name                Some globally unique value, like "myApp-iOS"   Name that identifies the app you are monitoring

Application Type    App Center application                         Type of app you are monitoring

Resource Group      A new resource group, or an existing one       The resource group in which to create the new
                    from the menu                                  Application Insights resource

Location            A location from the menu                       Choose a location near you, or near where your app is hosted

3. Click Create.
If your app supports multiple platforms (iOS, Android, etc.), it's best to create separate Application Insights
resources, one for each platform.

Export to Application Insights


In your new Application Insights resource on the Overview page in the Essentials section at the top, copy the
instrumentation key for this resource.
In the App Center instance for your app:
1. On the Settings page, click Export.
2. Choose New Export, pick Application Insights, then click Customize.
3. Paste your Application Insights instrumentation key into the box.
4. Consent to increasing the usage of the Azure subscription containing your Application Insights resource. Each
Application Insights resource is free for the first 1 GB of data received per month. Learn more about Application
Insights pricing.
Remember to repeat this process for each platform your app supports.
Once export is set up, each custom event received by App Center is copied into Application Insights. It can take
several minutes for events to reach Application Insights, so if they don't show up immediately, wait a bit before
diagnosing further.
To give you more data when you first connect, the most recent 48 hours of custom events in App Center are
automatically exported to Application Insights.

Start monitoring your app


Application Insights can query, segment, filter, and analyze the custom event telemetry from your apps, beyond the
analytics tools App Center provides.
1. Query your custom event telemetry. From the Application Insights Overview page, choose Analytics.

The Application Insights Analytics portal associated with your Application Insights resource will open. The
Analytics portal lets you directly query your data using the Log Analytics query language, so you can ask
arbitrarily complex questions about your app and its users.
Open a new tab in the Analytics portal, then paste in the following query. It returns a count of how many
distinct users have sent each custom event from your app in the last 24 hours, sorted by these distinct
counts.

customEvents
| where timestamp >= ago(24h)
| summarize dcount(user_Id) by name
| order by dcount_user_Id desc
a. Select the query by clicking anywhere on the query in the text editor.
b. Then click Go to run the query.
Learn more about Application Insights Analytics and the Log Analytics query language.
2. Segment and filter your custom event telemetry. From the Application Insights Overview page,
choose Users in the table of contents.

The Users tool shows how many users of your app clicked certain buttons, visited certain screens, or
performed any other action that you are tracking as an event with the App Center SDK. If you've been
looking for a way to segment and filter your App Center events, the Users tool is a great choice.
For example, segment your usage by geography by choosing Country or region in the Split by dropdown
menu.
3. Analyze conversion, retention, and navigation patterns in your app. From the Application Insights
Overview page, choose User Flows in the table of contents.

The User Flows tool visualizes which events users send after some starting event. It's useful for getting an
overall picture of how users navigate through your app. It can also reveal places where many users are
churning from your app, or repeating the same actions over and over.
In addition to User Flows, Application Insights has several other usage analytics tools to answer specific
questions:
Funnels for analyzing and monitoring conversion rates.
Retention for analyzing how well your app retains users over time.
Workbooks for combining visualizations and text into a shareable report.
Cohorts for naming and saving specific groups of users or events so they can be easily referenced from
other analytics tools.
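To make the distinct-user query from step 1 concrete, here is what it computes, expressed in plain JavaScript over sample rows (a sketch of the semantics, not how Analytics actually executes it):

```javascript
// Equivalent of:
//   customEvents | summarize dcount(user_Id) by name | order by dcount_user_Id desc
function distinctUsersByEvent(rows) {
  var users = {}; // event name -> Set of distinct user ids
  rows.forEach(function (r) {
    (users[r.name] = users[r.name] || new Set()).add(r.user_Id);
  });
  return Object.keys(users)
    .map(function (name) { return { name: name, dcount_user_Id: users[name].size }; })
    .sort(function (a, b) { return b.dcount_user_Id - a.dcount_user_Id; });
}

var sample = [
  { name: "Video clicked", user_Id: "u1" },
  { name: "Video clicked", user_Id: "u2" },
  { name: "Video clicked", user_Id: "u1" }, // repeat user, counted once
  { name: "Signup",        user_Id: "u1" }
];
var result = distinctUsersByEvent(sample);
```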

Clean up resources
If you do not want to continue using Application Insights with App Center, turn off export in App Center and delete
the Application Insights resource. This will prevent you from being charged further by Application Insights for this
resource.
To turn off export in App Center:
1. In App Center, go to Settings and choose Export.
2. Click the Application Insights export you want to delete, then click Delete export at the bottom and confirm.
To delete the Application Insights resource:
1. In the left-hand menu of the Azure portal, click Resource groups and then choose the resource group in which
your Application Insights resource was created.
2. Open the Application Insights resource you want to delete. Then click Delete in the top menu of the resource
and confirm. This will permanently delete the copy of the data that was exported to Application Insights.

Next steps
Understand how customers are using your app
Find and diagnose run-time exceptions with Azure
Application Insights
12/7/2017

Azure Application Insights collects telemetry from your application to help identify and diagnose run-time
exceptions. This tutorial takes you through this process with your application. You learn how to:
Modify your project to enable exception tracking
Identify exceptions for different components of your application
View details of an exception
Download a snapshot of the exception to Visual Studio for debugging
Analyze details of failed requests using query language
Create a new work item to correct the faulty code

Prerequisites
To complete this tutorial:
Install Visual Studio 2017 with the following workloads:
[Link] and web development
Azure development
Download and install the Visual Studio Snapshot Debugger.
Enable Visual Studio Snapshot Debugger
Deploy a .NET application to Azure and enable the Application Insights SDK.
The tutorial tracks the identification of an exception in your application, so modify your code in your
development or test environment to generate an exception.

Log in to Azure
Log in to the Azure portal at https://portal.azure.com.

Analyze failures
Application Insights collects any failures in your application and lets you view their frequency across different
operations to help you focus your efforts on those with the highest impact. You can then drill down into the
details of these failures to identify the root cause.
1. Select Application Insights and then your subscription.
2. To open the Failures panel either select Failures under the Investigate menu or click the Failed requests
graph.
3. The Failed requests panel shows the count of failed requests and the number of users affected for each
operation for the application. By sorting this information by user you can identify those failures that most
impact users. In this example, the GET Employees/Create and GET Customers/Details are likely
candidates to investigate because of their large number of failures and impacted users. Selecting an
operation shows further information about this operation in the right panel.

4. Reduce the time window to zoom in on the period where the failure rate shows a spike.
5. Click View Details to see the details for the operation. This includes a Gantt chart that shows two failed
dependencies which collectively took almost half of a second to complete. You can find out more about
analyzing performance issues by completing the tutorial Find and diagnose performance issues with Azure
Application Insights.

6. The operation details also show a FormatException, which appears to have caused the failure. Click the
exception or the Top 3 exception types count to view its details. You can see that it's due to an invalid
zip code.
Identify failing code
The Snapshot Debugger collects snapshots of the most frequent exceptions in your application to assist you in
diagnosing its root cause in production. You can view debug snapshots in the portal to see the call stack and
inspect variables at each call stack frame. You can then debug the source code by downloading the snapshot and
opening it in Visual Studio 2017.
1. In the properties of the exception, click Open debug snapshot.
2. The Debug Snapshot panel opens with the call stack for the request. Click any method to view the values of
all local variables at the time of the request. Starting from the top method in this example, we can see local
variables that have no value.
3. The first call that has valid values is ValidZipCode, and we can see that the zip code provided contains
letters and can't be converted to an integer. This appears to be the error in the code that needs to be
corrected.

4. To download this snapshot into Visual Studio where we can locate the actual code that needs to be
corrected, click Download Snapshot.
5. The snapshot is loaded into Visual Studio.
6. You can now run a debug session in Visual Studio that quickly identifies the line of code that caused the
exception.
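The root cause found above, converting a zip code to an integer without validating it first, is a common pattern. A hedged JavaScript sketch of the bug and one possible fix (hypothetical code, not the tutorial's actual sample):

```javascript
// Buggy pattern: converts blindly; a zip code containing letters yields NaN
// and surfaces as a runtime exception (the FormatException analog here).
function parseZipUnsafe(zip) {
  var n = Number(zip);
  if (isNaN(n)) throw new Error("FormatException: '" + zip + "' is not a valid zip code");
  return n;
}

// Safer pattern: validate the whole string before converting,
// and reject bad input instead of throwing mid-request.
function parseZipSafe(zip) {
  if (!/^\d{5}$/.test(zip)) return null;
  return parseInt(zip, 10);
}

var good = parseZipSafe("98101"); // valid five-digit zip
var bad = parseZipSafe("981O1");  // letter O instead of zero
```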
Use analytics data
All data collected by Application Insights is stored in Azure Log Analytics, which provides a rich query language that
allows you to analyze the data in a variety of ways. We can use this data to analyze the requests that generated the
exception we're researching.
1. Click the CodeLens information above the code to view telemetry provided by Application Insights.

2. Click Analyze impact to open Application Insights Analytics. It's populated with several queries that
provide details on failed requests such as impacted users, browsers, and regions.
Add work item
If you connect Application Insights to a tracking system such as Visual Studio Team Services or GitHub, you can
create a work item directly from Application Insights.
1. Return to the Exception Properties panel in Application Insights.
2. Click New Work Item.
3. The New Work Item panel opens with details about the exception already populated. You can add any
additional information before saving it.

Next steps
Now that you've learned how to identify run-time exceptions, advance to the next tutorial to learn how to identify
and diagnose performance issues.
Identify performance issues
Find and diagnose performance issues with Azure
Application Insights
11/2/2017

Azure Application Insights collects telemetry from your application to help analyze its operation and performance.
You can use this information to identify problems that may be occurring or to identify improvements to the
application that would most impact users. This tutorial takes you through the process of analyzing the
performance of both the server components of your application and the client-side experience. You learn how to:
Identify the performance of server-side operations
Analyze server operations to determine the root cause of slow performance
Identify slowest client-side operations
Analyze details of page views using query language

Prerequisites
To complete this tutorial:
Install Visual Studio 2017 with the following workloads:
[Link] and web development
Azure development
Deploy a .NET application to Azure and enable the Application Insights SDK.
Enable the Application Insights profiler for your application.

Log in to Azure
Log in to the Azure portal at https://portal.azure.com.

Identify slow server operations


Application Insights collects performance details for the different operations in your application. By identifying
those operations with the longest duration, you can diagnose potential problems or best target your ongoing
development to improve the overall performance of the application.
1. Select Application Insights and then select your subscription.
2. To open the Performance panel either select Performance under the Investigate menu or click the
Server Response Time graph.
3. The Performance panel shows the count and average duration of each operation for the application. You
can use this information to identify those operations that most impact users. In this example, the GET
Customers/Details and GET Home/Index are likely candidates to investigate because of their relatively
high duration and number of calls. Other operations may have a higher duration but were rarely called, so
the effect of their improvement would be minimal.

4. The graph currently shows the average duration of all operations over time. Add the operations that you're
interested in by pinning them to the graph. This shows that there are some peaks worth investigating.
Isolate this further by reducing the time window of the graph.
5. Click an operation to view its performance panel on the right. This shows the distribution of durations for
different requests. Users typically notice slow performance at about half a second, so reduce the window to
requests over 500 milliseconds.

6. In this example, you can see that a significant number of requests are taking over a second to process. You
can see the details of this operation by clicking on Operation details.
7. The information that you've gathered so far only confirms that there is slow performance, but it does little
to get to the root cause. The Profiler helps with this by showing the actual code that ran for the operation
and the time required for each step. Some operations may not have a trace since the profiler runs
periodically. Over time, more operations should have traces. To start the profiler for the operation, click
Profiler traces.
8. The trace shows the individual events for each operation so you can diagnose the root cause for the duration of
the overall operation. Click one of the top examples, which have the longest duration.
9. Click Show Hot Path to highlight the specific path of events that most contribute to the total duration of
the operation. In this example, you can see that the slowest call is from
[Link] method. The part that takes the most time is the
[Link] method. If this line of code is executed every time the function is called, an
unnecessary network call is made and CPU resources are consumed. The best fix is to move this
line into a startup method that executes only once.

10. The Performance Tip at the top of the screen supports the assessment that the excessive duration is due to
waiting. Click the waiting link for documentation on interpreting the different types of events.

11. For further analysis, you can click Download .etl trace to download the trace into Visual Studio.

Use analytics data for server


Application Insights Analytics provides a rich query language that allows you to analyze all data collected by
Application Insights. You can use this to perform deep analysis on request and performance data.
1. Return to the operation detail panel and click the Analytics button.
2. Application Insights Analytics opens with a query for each of the views in the panel. You can run these
queries as they are or modify them for your requirements. The first query shows the duration for this
operation over time.
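The generated duration-over-time query can be adapted to add the 500-millisecond cutoff from the earlier step. The sketch below uses the standard Application Insights `requests` schema; the operation name is illustrative, so substitute one from your own application:

```kusto
// Hourly average duration and slow-request count for one operation.
requests
| where timestamp > ago(1d)
| where name == "GET Home/Index"   // hypothetical operation name
| summarize avgDuration = avg(duration), over500ms = countif(duration > 500)
    by bin(timestamp, 1h)
| render timechart
```

Plotting both series together makes it easy to see whether the slow requests cluster around the peaks identified on the Performance panel.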

Identify slow client operations


In addition to identifying server processes to optimize, Application Insights can analyze the perspective of client
browsers. This can help you identify potential improvements to client components and even identify issues with
different browsers or different locations.
1. Select Browser under Investigate to open the browser summary. This provides a visual summary of
various telemetries of your application from the perspective of the browser.
2. Scroll down to What are my slowest pages?. This shows a list of the pages in your application that have
taken the longest time for clients to load. You can use this information to prioritize those pages that have
the most significant impact on the user.
3. Click one of the pages to open the Page view panel. In the example, the /FabrikamProd page is showing
an excessive average duration. The Page view panel provides details about this page including a
breakdown of different duration ranges.
4. Click the highest duration to inspect details of these requests. Then click the individual request to view
details of the client requesting the page including the type of browser and its location. This information can
assist you in determining whether there are performance issues related to particular types of clients.

Use analytics data for client


Like the data collected for server performance, Application Insights makes all client data available for deep analysis
using Analytics.
1. Return to the browser summary and click the Analytics icon.
2. Application Insights Analytics opens with a query for each of the views in the panel. The first query shows
the duration for different page views over time.

3. Smart Diagnostics is a feature of Application Insights Analytics that identifies unique patterns in the data.
When you click the Smart Diagnostics dot in the line chart, the same query is run without the records that
caused the anomaly. Details of those records are shown in the comment section of the query so you can
identify the properties of those page views that are causing the excessive duration.
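A variation on the generated query makes this anomaly hunt concrete: exclude the suspect page and re-plot the trend. The page name below is taken from the example earlier in this tutorial and may differ in your data:

```kusto
// Page-view trend with the suspect page from the example excluded.
pageViews
| where timestamp > ago(7d)
| where name != "/FabrikamProd"
| summarize avgDuration = avg(duration) by bin(timestamp, 1h)
| render timechart
```

If the trend flattens once the page is excluded, that page is confirmed as the driver of the excessive duration.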
Next steps
Now that you've learned how to identify run-time exceptions, advance to the next tutorial to learn how to create
alerts in response to failures.
Alert on application health
Monitor and alert on application health with Azure
Application Insights
11/1/2017 • 3 min to read

Azure Application Insights allows you to monitor your application and send you alerts when it is either unavailable,
experiencing failures, or suffering from performance issues. This tutorial takes you through the process of creating
tests to continuously check the availability of your application and to send different kinds of alerts in response to
detected issues. You learn how to:
Create availability test to continuously check the response of the application
Send mail to administrators when a problem occurs
Create alerts based on performance metrics
Use a Logic App to send summarized telemetry on a schedule.

Prerequisites
To complete this tutorial:
Install Visual Studio 2017 with the following workloads:
ASP.NET and web development
Azure development
Deploy a .NET application to Azure and enable the Application Insights SDK.

Log in to Azure
Log in to the Azure portal.

Create availability test


Availability tests in Application Insights allow you to automatically test your application from various locations
around the world. In this tutorial, you will perform a simple test to ensure that the application is available. You
could also create a complete walkthrough to test its detailed operation.
1. Select Application Insights and then select your subscription.
2. Select Availability under the Investigate menu and then click Add test.
3. Type in a name for the test and leave the other defaults. This requests the home page of the application
every 5 minutes from 5 different geographic locations.
4. Select Alerts to open the Alerts panel, where you can define how to respond if the test fails. Type
in an email address to notify when the alert criteria are met. You can optionally type in the address of a
webhook to call when the alert criteria are met.
5. Return to the test panel, and after a few minutes you should start seeing results from the availability test.
Click on the test name to view details from each location. The scatter chart shows the success and duration
of each test.
6. You can drill down into the details of any particular test by clicking its dot in the scatter chart. The
example below shows the details for a failed request.

7. If the alert criteria are met, a mail similar to the one below is sent to the address that you specified.
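The raw results behind these tests are also queryable in Analytics. This is a sketch against the standard `availabilityResults` schema, useful for comparing behavior across test locations:

```kusto
// Availability test runs over the last day, by test name and location.
availabilityResults
| where timestamp > ago(1d)
| summarize runs = count(), avgDuration = avg(duration) by name, location
| sort by avgDuration desc
```

A location with a much higher average duration than its peers can indicate a regional networking issue rather than a problem in the application itself.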

Create an alert from metrics


In addition to sending alerts from an availability test, you can create an alert from any performance metrics that are
being collected for your application.
1. Select Alerts from the Configure menu. This opens the Azure Alerts panel. There may be other alert rules
configured here for other services.
2. Click Add metric alert. This opens the panel to create a new alert rule.

3. Type in a Name for the alert rule, and select your application in the dropdown for Resource.
4. Select a Metric to sample. A graph is displayed to indicate the value of this request over the past 24 hours.
This assists you in setting the condition for the metric.
5. Specify a Condition and Threshold for the alert. The threshold is the value that the metric must cross
before an alert is created.
6. Under Notify via, check the Email owners, contributors, and readers box to send a mail to these users
when the alert condition is met, and add the email address of any additional recipients. You can also specify
a webhook or a Logic App here that runs when the condition is met. These could be used to attempt to
mitigate the detected issue.
Proactively send information
Alerts are created in reaction to a particular set of issues identified in your application, and you typically reserve
alerts for critical conditions requiring immediate attention. You can proactively receive information about your
application with a Logic App that runs automatically on a schedule. For example, you could have a mail sent to
administrators daily with summary information that requires further evaluation.
For details on creating a Logic App with Application Insights, see Automate Application Insights processes by using
Logic Apps

Next steps
Now that you've learned how to alert on issues, advance to the next tutorial to learn how to analyze how users are
interacting with your application.
Understand users
Use Azure Application Insights to understand how
customers are using your application
11/9/2017 • 6 min to read

Azure Application Insights collects usage information to help you understand how your users interact with your
application. This tutorial walks you through the different resources that are available to analyze this information.
You will learn how to:
Analyze details about users accessing your application
Use session information to analyze how customers use your application
Define funnels that let you compare your desired user activity to their actual activity
Create a workbook to consolidate visualizations and queries into a single document
Group similar users to analyze them together
Learn which users are returning to your application
Inspect how users navigate through your application

Prerequisites
To complete this tutorial:
Install Visual Studio 2017 with the following workloads:
ASP.NET and web development
Azure development
Download and install the Visual Studio Snapshot Debugger.
Deploy a .NET application to Azure and enable the Application Insights SDK.
Send telemetry from your application by adding custom events and page views
Send user context to track what a user does over time and fully utilize the usage features.

Log in to Azure
Log in to the Azure portal.

Get information about your users


The Users panel allows you to understand important details about your users in a variety of ways. You can use this
panel to understand such information as where your users are connecting from, details of their client, and what
areas of your application they're accessing.
1. Select Application Insights and then select your subscription.
2. Select Users in the menu.
3. The default view shows the number of unique users that have connected to your application over the past
24 hours. You can change the time window and set various other criteria to filter this information.
4. Click the During dropdown and change the time window to 7 days. This increases the data included in the
different charts in the panel.

5. Click the Split by dropdown to add a breakdown by a user property to the graph. Select Country or
region. The graph includes the same data but allows you to view a breakdown of the number of users for
each country.

6. Position the cursor over different bars in the chart and note that the count for each country reflects only the
time window represented by that bar.
7. Have a look at the Insights column on the right, which performs analysis on your user data. It provides
information such as the number of unique sessions over the time period and records with common
properties that make up a significant portion of the user data.
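The counts shown in the Users panel can be approximated in Analytics. This sketch assumes the standard `user_Id` dimension over page views and custom events:

```kusto
// Daily unique users over the past week, approximating the Users panel.
union pageViews, customEvents
| where timestamp > ago(7d)
| summarize uniqueUsers = dcount(user_Id) by bin(timestamp, 1d)
| render barchart
```

Adding `by client_CountryOrRegion` to the summarize clause reproduces the country breakdown described above.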
Analyze user sessions
The Sessions panel is similar to the Users panel. Where Users helps you understand details about the users
accessing your application, Sessions helps you understand how those users used your application.
1. Select Sessions in the menu.
2. Have a look at the graph and note that you have the same options to filter and break down the data as in the
Users panel.

3. The Sample of these sessions pane on the right lists sessions that include a large number of events. These
are interesting sessions to analyze.
4. Click on one of the sessions to view its Session Timeline, which shows every action in the sessions. This
can help you identify information such as the sessions with a large number of exceptions.
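Session data can be sliced the same way, keyed on the standard `session_Id` dimension. This sketch surfaces the event-heavy sessions that the panel highlights as interesting:

```kusto
// The ten busiest sessions of the past week, by event count.
union pageViews, customEvents
| where timestamp > ago(7d)
| summarize events = count() by session_Id
| top 10 by events desc
```

Any `session_Id` returned here can then be investigated in the Session Timeline described above.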

Group together similar users


A Cohort is a set of users grouped by similar characteristics. You can use cohorts to filter data in other panels
allowing you to analyze particular groups of users. For example, you might want to analyze only users who
completed a purchase.
1. Select Cohorts in the menu.
2. Click New to create a new cohort.
3. Select the Who used dropdown and select an action. Only users who performed this action within the time
window of the report will be included.

4. Select Users in the menu.


5. In the Show dropdown, select the cohort you just created. The data for the graph is limited to those users.

Compare desired activity to reality


While the previous panels are focused on what users of your application did, Funnels focus on what you want
users to do. A funnel represents a set of steps in your application and the percentage of users who move between
steps. For example, you could create a funnel that measures the percentage of users who connect to your
application and then search for a product. You can then see the percentage of users who add that product to a shopping cart,
and then the percentage of those who complete a purchase.
1. Select Funnels in the menu and then click New.
2. Type in a Funnel Name.
3. Create a funnel with at least two steps by selecting an action for each step. The list of actions is built from
usage data collected by Application Insights.

4. Click Save to save the funnel and then view its results. The window to the right of the funnel shows the most
common events before the first activity and after the last activity to help you understand user tendencies
around the particular sequence.
Learn which customers return
Retention helps you understand which users are coming back to your application.
1. Select Retention in the menu.
2. By default, the analyzed information includes users who performed any action and then returned to perform
any action. You can change this filter to include, for example, only those users who returned after
completing a purchase.

3. The returning users that match the criteria are shown in graphical and table form for different time
durations. The typical pattern is for a gradual drop in returning users over time. A sudden drop from one
time period to the next might raise a concern.
Analyze user navigation
A User flow visualizes how users navigate between the pages and features of your application. This helps you
answer questions such as where users typically move from a particular page, how they typically exit your
application, and if there are any actions that are regularly repeated.
1. Select User flows in the menu.
2. Click New to create a new user flow and then click Edit to edit its details.
3. Increase the Time Range to 7 days and then select an initial event. The flow will track user sessions that
start with that event.
4. The user flow is displayed, and you can see the different user paths and their session counts. Blue lines
indicate an action that the user performed after the current action. A red line indicates the end of the user
session.
5. To remove an event from the flow, click the x in the corner of the action and then click Create Graph. The
graph is redrawn with any instances of that event removed. Click Edit to see that the event is now added to
Excluded events.

Consolidate usage data


Workbooks combine data visualizations, Analytics queries, and text into interactive documents. You can use
workbooks to group together common usage information, consolidate information from a particular incident, or
report back to your team on your application's usage.
1. Select Workbooks in the menu.
2. Click New to create a new workbook.
3. A query is already provided that includes all usage data in the last day displayed as a bar chart. You can use
this query, manually edit it, or click Sample queries to select from other useful queries.
4. Click Done editing.
5. Click Edit in the top pane to edit the text at the top of the workbook. This is formatted using markdown.

6. Click Add users to add a graph with user information. Edit the details of the graph if you want and then click
Done editing to save it.

Next steps
Now that you've learned how to analyze your users, advance to the next tutorial to learn how to create custom
dashboards that combine this information with other useful data about your application.
Create custom dashboards
Create custom KPI dashboards using Azure
Application Insights
11/1/2017 • 6 min to read

You can create multiple dashboards in the Azure portal that each include tiles visualizing data from multiple Azure
resources across different resource groups and subscriptions. You can pin different charts and views from Azure
Application Insights to create custom dashboards that provide you with a complete picture of the health and
performance of your application. This tutorial walks you through the creation of a custom dashboard that includes
multiple types of data and visualizations from Azure Application Insights. You learn how to:
Create a custom dashboard in Azure
Add a tile from the Tile Gallery
Add standard metrics in Application Insights to the dashboard
Add a custom metric chart from Application Insights to the dashboard
Add the results of an Analytics query to the dashboard

Prerequisites
To complete this tutorial:
Deploy a .NET application to Azure and enable the Application Insights SDK.

Log in to Azure
Log in to the Azure portal.

Create a new dashboard


A single dashboard can contain resources from multiple applications, resource groups, and subscriptions. Start the
tutorial by creating a new dashboard for your application.
1. On the main screen of the portal, select New dashboard.

2. Type a name for the dashboard.


3. Have a look at the Tile Gallery for a variety of tiles that you can add to your dashboard. In addition to adding
tiles from the gallery you can pin charts and other views directly from Application Insights to the dashboard.
4. Locate the Markdown tile and drag it on to your dashboard. This tile allows you to add text formatted in
markdown which is ideal for adding descriptive text to your dashboard.
5. Add text to the tile's properties and resize it on the dashboard canvas.
6. Click Done customizing at the top of the screen to exit tile customization mode and then Publish changes
to save your changes.

Add health overview


A dashboard with just static text isn't very interesting, so now add a tile from Application Insights to show
information about your application. You can add Application Insights tiles from the Tile Gallery, or you can pin
them directly from Application Insights screens. This allows you to configure charts and views that you're already
familiar with before pinning them to your dashboard. Start by adding the standard health overview for your
application. This requires no configuration and allows minimal customization in the dashboard.
1. Select Application Insights in the Azure menu and then select your application.
2. In the Overview timeline, select the context menu and click Pin to dashboard. This adds the tile to the last
dashboard that you were viewing.
3. At the top of the screen, click View dashboard to return to your dashboard.
4. The Overview timeline is now added to your dashboard. Click and drag it into position and then click Done
customizing and Publish changes. Your dashboard now has a tile with some useful information.

Add custom metric chart


The Metrics panel allows you to graph a metric collected by Application Insights over time with optional filters and
grouping. Like everything else in Application Insights, you can add this chart to the dashboard. This does require
you to do a little customization first.
1. Select Application Insights in the Azure menu and then select your application.
2. Select Metrics.
3. An empty chart has already been created, and you're prompted to add a metric. Add a metric to the chart
and optionally add a filter and a grouping. The example below shows the number of server requests
grouped by success. This gives a running view of successful and unsuccessful requests.
4. Select the context menu for the chart and select Pin to dashboard. This adds the view to the last dashboard
that you were working with.

5. At the top of the screen, click View dashboard to return to your dashboard.
6. The Overview Timeline is now added to your dashboard. Click and drag it into position and then click Done
customizing and then Publish changes.
Metrics Explorer
Metrics Explorer is similar to Metrics, although it allows significantly more customization when added to the
dashboard. Which one you use to graph your metrics depends on your particular preferences and requirements.
1. Select Application Insights in the Azure menu and then select your application.
2. Select Metrics Explorer.
3. Click to edit the chart and select one or more metrics and optionally a detailed configuration. The example
displays a line chart tracking average page response time.
4. Click the pin icon in the top right to add the chart to your dashboard and then drag it into position.

5. The Metrics Explorer tile allows more customization once it's added to the dashboard. Right click the tile and
select Edit title to add a custom title. Go ahead and make other customizations if you want.
6. You now have the Metrics Explorer chart added to your dashboard.

Add Analytics query


Azure Application Insights Analytics provides a rich query language that allows you to analyze all of the data
collected by Application Insights. Just like charts and other views, you can add the output of an Analytics query to your
dashboard.
Since Azure Application Insights Analytics is a separate service, you need to share your dashboard for it to include
an Analytics query. When you share an Azure dashboard, you publish it as an Azure resource which can make it
available to other users and resources.
1. At the top of the dashboard screen, click Share.

2. Keep the Dashboard name the same and select the Subscription Name to share the dashboard. Click
Publish. The dashboard is now available to other services and subscriptions. You can optionally define
specific users who should have access to the dashboard.
3. Select Application Insights in the Azure menu and then select your application.
4. Click Analytics at the top of the screen to open the Analytics portal.

5. Type the following query, which returns the top 10 most requested pages and their request count:

requests
| summarize count() by name
| sort by count_ desc
| take 10

6. Click Go to validate the results of the query.


7. Click the pin icon and select the name of your dashboard. Unlike the previous steps, where the chart was
pinned to the last dashboard that you viewed, this option asks you to choose a dashboard because the
Analytics console is a separate service and must select from all available shared dashboards.

8. Before you go back to the dashboard, add another query, but this time render it as a chart so you see the
different ways to visualize an Analytics query in a dashboard. Start with the following query that
summarizes the top 10 operations with the most exceptions.

exceptions
| summarize count() by operation_Name
| sort by count_ desc
| take 10

9. Select Chart and then change to a Doughnut to visualize the output.


10. Click the pin icon to pin the chart to your dashboard and this time select the link to return to your
dashboard.
11. The results of the queries are now added to your dashboard in the format that you selected. Click and drag each
into position and then click Done editing.
12. Right click each of the tiles and select Edit Title to give them a descriptive title.

13. Click Publish changes to commit the changes to your dashboard that now includes a variety of charts and
visualizations from Application Insights.

Next steps
Now that you've learned how to create custom dashboards, have a look at the rest of the Application Insights
documentation including a case study.
Deep diagnostics
Monitor Azure web app performance
11/1/2017 • 3 min to read

In the Azure Portal you can set up application performance monitoring for your Azure web apps. Azure Application
Insights instruments your app to send telemetry about its activities to the Application Insights service, where it is
stored and analyzed. There, metric charts and search tools can be used to help diagnose issues, improve
performance, and assess usage.

Run time or build time


You can configure monitoring by instrumenting the app in either of two ways:
Run-time - You can select a performance monitoring extension when your web app is already live. It isn't
necessary to rebuild or re-install your app. You get a standard set of packages that monitor response times,
success rates, exceptions, dependencies, and so on.
Build time - You can install a package in your app in development. This option is more versatile. In addition to
the same standard packages, you can write code to customize the telemetry or to send your own telemetry.
You can log specific activities or record events according to the semantics of your app domain.

Run time instrumentation with Application Insights


If you're already running a web app in Azure, you already get some monitoring: request and error rates. Add
Application Insights to get more, such as response times, monitoring calls to dependencies, smart detection, and
the powerful Log Analytics query language.
1. Select Application Insights in the Azure control panel for your web app.

Choose to create a new resource, unless you already set up an Application Insights resource for this app
by another route.
2. Instrument your web app after Application Insights has been installed.
Enable client side monitoring for page view and user telemetry.
Select Settings > Application Settings
Under App Settings, add a new key value pair:
Key: APPINSIGHTS_JAVASCRIPT_ENABLED

Value: true

Save the settings and Restart your app.


3. Monitor your app. Explore the data.
Later, you can build the app with Application Insights if you want.
How do I remove Application Insights, or switch to sending to another resource?
In Azure, open the web app control blade, and under Development Tools, open Extensions. Delete the
Application Insights extension. Then under Monitoring, choose Application Insights and create or select the
resource you want.

Build the app with Application Insights


Application Insights can provide more detailed telemetry by installing an SDK into your app. In particular, you can
collect trace logs, write custom telemetry, and get more detailed exception reports.
1. In Visual Studio (2013 update 2 or later), configure Application Insights for your project.
Right-click the web project, and select Add > Application Insights or Configure Application Insights.

If you're asked to sign in, use the credentials for your Azure account.
The operation has two effects:
a. Creates an Application Insights resource in Azure, where telemetry is stored, analyzed and displayed.
b. Adds the Application Insights NuGet package to your code (if it isn't there already), and configures it to
send telemetry to the Azure resource.
2. Test the telemetry by running the app in your development machine (F5).
3. Publish the app to Azure in the usual way.
How do I switch to sending to a different Application Insights resource?
In Visual Studio, right-click the project, choose Configure Application Insights and choose the resource you
want. You get the option to create a new resource. Rebuild and redeploy.

Explore the data


1. On the Application Insights blade of your web app control panel, you see Live Metrics, which shows requests
and failures within a second or two of them occurring. It's a very useful display when you're republishing your
app - you can see any problems immediately.
2. Click through to the full Application Insights resource.

You can also go there directly from the Azure resource navigation.
3. Click through any chart to get more detail:
You can customize metrics blades.
4. Click through further to see individual events and their properties:

Notice the "..." link to open all properties.


You can customize searches.
For more powerful searches over your telemetry, use the Log Analytics query language.
More telemetry
Web page load data
Custom telemetry

Video

Next steps
Run the profiler on your live app.
Azure Functions - monitor Azure Functions with Application Insights
Enable Azure diagnostics to be sent to Application Insights.
Monitor service health metrics to make sure your service is available and responsive.
Receive alert notifications whenever operational events happen or metrics cross a threshold.
Use Application Insights for JavaScript apps and web pages to get client telemetry from the browsers that visit
a web page.
Set up Availability web tests to be alerted if your site is down.
Application Insights for Azure Cloud Services
1/3/2018 • 9 min to read

Microsoft Azure Cloud service apps can be monitored by Application Insights for availability, performance,
failures, and usage by combining data from Application Insights' SDKs with Azure Diagnostics data from your
Cloud Services. With the feedback you get about the performance and effectiveness of your app in the wild, you
can make informed choices about the direction of the design in each development lifecycle.

Before you start


You'll need:
A subscription with Microsoft Azure. Sign in with a Microsoft account, which you might have for Windows,
Xbox Live, or other Microsoft cloud services.
Microsoft Azure tools 2.9 or later
Developer Analytics Tools 7.10 or later

Quick start
The quickest and easiest way to monitor your cloud service with Application Insights is to choose that option
when you publish your service to Azure.
This option instruments your app at run time, giving you all the telemetry you need to monitor requests,
exceptions, and dependencies in your web role, as well as performance counters from your worker roles. Any
diagnostic traces generated by your app are also sent to Application Insights.
If that's all you need, you're done! Next steps are viewing metrics from your app, querying your data with
Analytics, and maybe setting up a dashboard. You might want to set up availability tests and add code to your web
pages to monitor performance in the browser.
But you can also get more options:
Send data from different components and build configurations to separate resources.
Add custom telemetry from your app.
If those options are of interest to you, read on.

Sample Application instrumented with Application Insights


Take a look at this sample application in which Application Insights is added to a cloud service with two worker
roles hosted in Azure.
What follows tells you how to adapt your own cloud service project in the same way.

Plan resources and resource groups


The telemetry from your app is stored, analyzed and displayed in an Azure resource of type Application Insights.
Each resource belongs to a resource group. Resource groups are used for managing costs, for granting access to
team members, and to deploy updates in a single coordinated transaction. For example, you could write a script to
deploy an Azure Cloud Service and its Application Insights monitoring resources all in one operation.
Resources for components
The recommended scheme is to create a separate resource for each component of your application - that is, each
web role and worker role. You can analyze each component separately, but can create a dashboard that brings
together the key charts from all the components, so that you can compare and monitor them together.
An alternative scheme is to send the telemetry from more than one role to the same resource, but add a
dimension property to each telemetry item that identifies its source role. In this scheme, metric charts such as
exceptions normally show an aggregation of the counts from the different roles, but you can segment the chart by
the role identifier when required. Searches can also be filtered by the same dimension. This alternative makes it a
bit easier to view everything at the same time, but could also lead to some confusion between the roles.
Browser telemetry is usually included in the same resource as its server-side web role.
Put the Application Insights resources for the different components in one resource group. This makes it easy to
manage them together.
Separating development, test, and production
If you are developing custom events for your next feature while the previous version is live, you want to send the
development telemetry to a separate Application Insights resource. Otherwise it will be hard to find your test
telemetry among all the traffic from the live site.
To avoid this situation, create separate resources for each build configuration or 'stamp' (development, test,
production, ...) of your system. Put the resources for each build configuration in a separate resource group.
To send the telemetry to the appropriate resources, you can set up the Application Insights SDK so that it picks up
a different instrumentation key depending on the build configuration.
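For example, one common approach (a sketch, not the only option) is to choose the key with conditional compilation. The keys shown are placeholders for the resources you created, and STAGING is a hypothetical compilation symbol you would define in your test build configuration:

```csharp
// Sketch: pick the Application Insights resource by build configuration.
#if DEBUG
    TelemetryConfiguration.Active.InstrumentationKey = "<development-resource-key>";
#elif STAGING
    // STAGING is an illustrative custom symbol; define it in your test configuration.
    TelemetryConfiguration.Active.InstrumentationKey = "<test-resource-key>";
#else
    TelemetryConfiguration.Active.InstrumentationKey = "<production-resource-key>";
#endif
```

Because the key is chosen at compile time, each build configuration sends its telemetry to its own resource without any runtime switches.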

Create an Application Insights resource for each role


If you've decided to create a separate resource for each role - and perhaps a separate set for each build
configuration - then it's easiest to create them all in the Application Insights portal. (If you create resources
often, you can automate the process.)
1. In the Azure portal, create a new Application Insights resource. For application type, choose ASP.NET web application.

2. Note that each resource is identified by an Instrumentation Key. You might need this later if you want to
manually configure or verify the configuration of the SDK.
Set up Azure Diagnostics for each role
Set this option to monitor your app with Application Insights. For web roles, this provides performance
monitoring, alerts, and diagnostics, as well as usage analysis. For other roles, you can search and monitor Azure
diagnostics such as restarts, performance counters, and calls to System.Diagnostics.Trace.
1. In Visual Studio Solution Explorer, under <YourCloudService>, Roles, open the properties of each role.
2. In Configuration, set Send diagnostics data to Application Insights and select the appropriate Application
Insights resource that you created earlier.
If you have decided to use a separate Application Insights resource for each build configuration, select the
configuration first.

This has the effect of inserting your Application Insights instrumentation keys into the files named
ServiceConfiguration.*.cscfg . (Sample code).

If you want to vary the level of diagnostic information sent to Application Insights, you can do so by editing the
.cscfg files directly.
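For reference, the relevant fragment of a ServiceConfiguration.*.cscfg file might look roughly like this (a sketch: the role name and key value are placeholders, the element layout is simplified, and your file will contain other diagnostics settings):

```xml
<ServiceConfiguration serviceName="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WorkerRoleA">
    <Instances count="2" />
    <ConfigurationSettings>
      <!-- The instrumentation key that Visual Studio inserts for this role -->
      <Setting name="APPINSIGHTS_INSTRUMENTATIONKEY"
               value="00000000-0000-0000-0000-000000000000" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
```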

Install the SDK in each project


This step lets you add custom business telemetry to any role, for a closer analysis of how your
application is used and performs.
In Visual Studio, configure the Application Insights SDK for each cloud app project.
1. Web roles: Right-click the project and choose Configure Application Insights or Add > Application
Insights telemetry.
2. Worker roles:
Right-click the project and select Manage NuGet Packages.
Add Application Insights for Windows Servers.

3. Configure the SDK to send data to the Application Insights resource.


In a suitable startup function, set the instrumentation key from the configuration setting in the .cscfg file:

TelemetryConfiguration.Active.InstrumentationKey =
    RoleEnvironment.GetConfigurationSettingValue("APPINSIGHTS_INSTRUMENTATIONKEY");

Do this for each role in your application. See the examples:


Web role
Worker role
For web pages
4. Set the ApplicationInsights.config file to always be copied to the output directory.
(In the .config file, you'll see messages asking you to place the instrumentation key there. However, for
cloud applications it's better to set it from the .cscfg file. This ensures that the role is correctly identified in
the portal.)
Run and publish the app
Run your app, and sign into Azure. Open the Application Insights resources you created, and you'll see individual
data points appearing in Search, and aggregated data in Metric Explorer.
Add more telemetry - see the sections below - and then publish your app to get live diagnostic and usage
feedback.
No data?
Open the Search tile, to see individual events.
Use the application, opening different pages so that it generates some telemetry.
Wait a few seconds and click Refresh.
See Troubleshooting.
View Azure Diagnostic events
Where to find the Azure Diagnostics information in Application Insights:
Performance counters are displayed as custom metrics.
Windows event logs are shown as traces and custom events.
Application logs, ETW logs, and any diagnostics infrastructure logs appear as traces.
To see performance counters and counts of events, open Metrics Explorer and add a new chart:

Use Search or an Analytics query to search across the various trace logs sent by Azure Diagnostics. For example,
suppose you have an unhandled exception which caused a Role to crash and recycle. That information would
show up in the Application channel of Windows Event Log. You can use Search to look at the Windows Event Log
error and get the full stack trace for the exception. That will help you find the root cause of the issue.
More telemetry
The sections below show how to get additional telemetry from different aspects of your application.

Track Requests from Worker roles


In web roles, the requests module automatically collects data about HTTP requests. See the sample MVCWebRole
for examples of how you can override the default collection behavior.
You can capture the performance of calls to worker roles by tracking them in the same way as HTTP requests. In
Application Insights, the Request telemetry type measures a unit of named server-side work that can be timed and
can independently succeed or fail. While HTTP requests are captured automatically by the SDK, you can insert your
own code to track requests to worker roles.
See the two sample worker roles instrumented to report requests: WorkerRoleA and WorkerRoleB
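The pattern can be sketched as follows, using the SDK's StartOperation helper so that timing, success, and correlation are handled for you (the operation name and DoWork method are illustrative placeholders for the role's own work):

```csharp
// Sketch: time a unit of worker-role work and report it as Request telemetry.
var telemetryClient = new TelemetryClient();
using (var operation = telemetryClient.StartOperation<RequestTelemetry>("ProcessQueueMessage"))
{
    try
    {
        DoWork(); // illustrative placeholder for the role's actual processing
        operation.Telemetry.Success = true;
    }
    catch (Exception)
    {
        operation.Telemetry.Success = false;
        throw; // StopOperation runs when the using block is disposed
    }
}
```

Each such operation then appears in the portal as a named server request with its own duration and success rate.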

Exceptions
See Monitoring Exceptions in Application Insights for information on how you can collect unhandled exceptions
from different web application types.
The sample web role has MVC5 and Web API 2 controllers. The unhandled exceptions from the two are captured
with the following handlers:
AiHandleErrorAttribute set up here for MVC5 controllers
AiWebApiExceptionLogger set up here for Web API 2 controllers
For worker roles, there are two ways to track exceptions:
TrackException(ex)
If you have added the Application Insights trace listener NuGet package, you can use
System.Diagnostics.Trace to log exceptions. Code example.
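A minimal sketch of the first option (DoWork is a placeholder for the role's own logic):

```csharp
var telemetryClient = new TelemetryClient();
try
{
    DoWork(); // placeholder for the worker role's actual processing
}
catch (Exception ex)
{
    telemetryClient.TrackException(ex); // report the exception to Application Insights
    throw; // rethrow so the role's normal error handling still runs
}
```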

Performance Counters
The following counters are collected by default:

* \Process(??APP_WIN32_PROC??)\% Processor Time
* \Memory\Available Bytes
* \.NET CLR Exceptions(??APP_CLR_PROC??)\# of Exceps Thrown / sec
* \Process(??APP_WIN32_PROC??)\Private Bytes
* \Process(??APP_WIN32_PROC??)\IO Data Bytes/sec
* \Processor(_Total)\% Processor Time

For web roles, these counters are also collected:

* \ASP.NET Applications(??APP_W3SVC_PROC??)\Requests/Sec
* \ASP.NET Applications(??APP_W3SVC_PROC??)\Request Execution Time
* \ASP.NET Applications(??APP_W3SVC_PROC??)\Requests In Application Queue

You can specify additional custom or other Windows performance counters by editing ApplicationInsights.config,
as in this example.
Correlated Telemetry for Worker Roles
Diagnosis is much richer when you can see what led to a failed or high-latency request. With web roles, the
SDK automatically sets up correlation between related telemetry. For worker roles, you can use a custom
telemetry initializer to set a common Operation.Id context attribute on all the telemetry to achieve the same
thing. It lets you see at a glance whether a latency or failure issue was caused by a dependency or by your own code.
Here's how:
Set the correlation ID into a CallContext as shown here. In this case, the request ID is used as the
correlation ID.
Add a custom TelemetryInitializer implementation that sets the Operation.Id to the correlation ID set above.
There's an example here: ItemCorrelationTelemetryInitializer
Add the custom telemetry initializer. You could do that in the ApplicationInsights.config file, or in code as
shown here.
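Put together, a telemetry initializer along these lines copies the correlation ID onto every telemetry item (a sketch: the "CorrelationId" slot name is an assumption; use whatever name you chose when storing the ID in the CallContext):

```csharp
// Sketch: copy a correlation ID from the logical CallContext onto all telemetry.
public class ItemCorrelationTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        // "CorrelationId" is a hypothetical slot name chosen when the ID was stored.
        var correlationId = CallContext.LogicalGetData("CorrelationId") as string;
        if (!string.IsNullOrEmpty(correlationId))
        {
            telemetry.Context.Operation.Id = correlationId;
        }
    }
}

// Register it in code (alternatively, register it in ApplicationInsights.config):
TelemetryConfiguration.Active.TelemetryInitializers.Add(new ItemCorrelationTelemetryInitializer());
```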
That's it! The portal experience is already wired up to help you see all associated telemetry at a glance:
Client telemetry
Add the JavaScript SDK to your web pages to get browser-based telemetry such as page view counts, page load
times, script exceptions, and to let you write custom telemetry in your page scripts.

Availability tests
Set up web tests to make sure your application stays live and responsive.

Display everything together


To get an overall picture of your system, you can bring the key monitoring charts together on one dashboard. For
example, you could pin the request and failure counts of each role.
If your system uses other Azure services such as Stream Analytics, include their monitoring charts as well.
If you have a client mobile app, insert some code to send custom events on key user operations, and create a
HockeyApp bridge. Create queries in Analytics to display the event counts, and pin them to the dashboard.
Example
The example monitors a service that has a web role and two worker roles.

Exception "method not found" on running in Azure Cloud Services


Did you build for .NET 4.6? 4.6 is not automatically supported in Azure Cloud Services roles. Install 4.6 on each
role before running your app.

Video

Next steps
Configure sending Azure Diagnostics to Application Insights
Automate creation of Application Insights resources
Automate Azure diagnostics
Azure Functions
Set up Application Insights for your ASP.NET website
11/1/2017 • 6 min to read

This procedure configures your ASP.NET web app to send telemetry to the Azure Application Insights
service. It works for ASP.NET apps that are hosted on your own IIS server or in the cloud. You get
charts and a powerful query language that help you understand the performance of your app and how
people are using it, plus automatic alerts on failures or performance issues. Many developers find these
features great as they are, but you can also extend and customize the telemetry if you need to.
Setup takes just a few clicks in Visual Studio. You can avoid charges by limiting the volume of telemetry,
which lets you experiment and debug, or monitor a site with few users. When you decide to monitor your
production site in earnest, it's easy to raise the limit later.

Before you start


You need:
Visual Studio 2013 update 3 or later. Later is better.
A subscription to Microsoft Azure. If your team or organization has an Azure subscription, the owner can
add you to it, by using your Microsoft account.
There are alternative topics to look at if you are interested in:
Instrumenting a web app at runtime
Azure Cloud Services

Step 1: Add the Application Insights SDK


Right-click your web app project in Solution Explorer, and choose Add > Application Insights
Telemetry... or Configure Application Insights.

(In Visual Studio 2015, there's also an option to add Application Insights in the New Project dialog.)
Continue to the Application Insights configuration page:
a. Select the account and subscription that you use to access Azure.
b. Select the resource in Azure where you want to see the data from your app. Usually:
Use a single resource for different components of a single application.
Create separate resources for unrelated applications.
If you want to set the resource group or the location where your data is stored, click Configure settings.
Resource groups are used to control access to data. For example, if you have several apps that form part of
the same system, you might put their Application Insights data in the same resource group.
c. Set a cap at the free data volume limit, to avoid charges. Application Insights is free up to a certain
volume of telemetry. After the resource is created, you can change your selection in the portal by opening
Features + pricing > Data volume management > Daily volume cap.
d. Click Register to go ahead and configure Application Insights for your web app. Telemetry will be sent
to the Azure portal, both during debugging and after you have published your app.
e. If you don't want to send telemetry to the portal while you're debugging, just add the Application
Insights SDK to your app but don't configure a resource in the portal. You will be able to see telemetry in
Visual Studio while you are debugging. Later, you can return to this configuration page, or you could wait
until after you have deployed your app and switch on telemetry at run time.
Step 2: Run your app
Run your app with F5. Open different pages to generate some telemetry.
In Visual Studio, you see a count of the events that have been logged.

Step 3: See your telemetry


You can see your telemetry either in Visual Studio or in the Application Insights web portal. Search
telemetry in Visual Studio to help you debug your app. Monitor performance and usage in the web portal
when your system is live.
See your telemetry in Visual Studio
In Visual Studio, open the Application Insights window. Either click the Application Insights button, or
right-click your project in Solution Explorer, select Application Insights, and then click Search Live
Telemetry.
In the Visual Studio Application Insights Search window, see the Data from Debug session view for
telemetry generated in the server side of your app. Experiment with the filters, and click any event to see
more detail.

NOTE
If you don't see any data, make sure the time range is correct, and click the Search icon.

Learn more about Application Insights tools in Visual Studio.


See telemetry in web portal
You can also see telemetry in the Application Insights web portal (unless you chose to install only the SDK).
The portal has more charts, analytic tools, and cross-component views than Visual Studio. The portal also
provides alerts.
Open your Application Insights resource. Either sign in to the Azure portal and find it there, or right-click
the project in Visual Studio, and let it take you there.

NOTE
If you get an access error: Do you have more than one set of Microsoft credentials, and are you signed in with the
wrong set? In the portal, sign out and sign in again.

The portal opens on a view of the telemetry from your app.

In the portal, click any tile or chart to see more detail.


Learn more about using Application Insights in the Azure portal.
Step 4: Publish your app
Publish your app to your IIS server or to Azure. Watch Live Metrics Stream to make sure everything is
running smoothly.
Your telemetry builds up in the Application Insights portal, where you can monitor metrics, search your
telemetry, and set up dashboards. You can also use the powerful Log Analytics query language to analyze
usage and performance, or to find specific events.
You can also continue to analyze your telemetry in Visual Studio, with tools such as diagnostic search and
trends.

NOTE
If your app sends enough telemetry to approach the throttling limits, automatic sampling switches on. Sampling
reduces the quantity of telemetry sent from your app, while preserving correlated data for diagnostic purposes.

You're all set


Congratulations! You installed the Application Insights package in your app, and configured it to send
telemetry to the Application Insights service on Azure.

The Azure resource that receives your app's telemetry is identified by an instrumentation key. You'll find
this key in the ApplicationInsights.config file.

Upgrade to future SDK versions


To upgrade to a new release of the SDK, open the NuGet package manager again, and filter on installed
packages. Select Microsoft.ApplicationInsights.Web, and choose Upgrade.
If you made any customizations to ApplicationInsights.config, save a copy of it before you upgrade. Then,
merge your changes into the new version.

Video

Next steps
More telemetry
Browser and page load data - Insert a code snippet in your web pages.
Get more detailed dependency and exception monitoring - Install Status Monitor on your server.
Code custom events to count, time, or measure user actions.
Get log data - Correlate log data with your telemetry.
Analysis
Working with Application Insights in Visual Studio
Includes information about debugging with telemetry, diagnostic search, and drill through to code.
Working with the Application Insights portal
Includes information about dashboards, powerful diagnostic and analytic tools, alerts, a live dependency
map of your application, and telemetry export.
Analytics - The powerful query language.
Alerts
Availability tests: Create tests to make sure your site is visible on the web.
Smart diagnostics: These tests run automatically, so you don't have to do anything to set them up. They
tell you if your app has an unusual rate of failed requests.
Metric alerts: Set these to warn you if a metric crosses a threshold. You can set them on custom metrics
that you code into your app.
Automation
Automate creating an Application Insights resource
Instrument web apps at runtime with Application
Insights
11/1/2017 • 7 min to read

You can instrument a live web app with Azure Application Insights, without having to modify or redeploy your
code. If your apps are hosted by an on-premises IIS server, install Status Monitor. If they're Azure web apps or
run in an Azure VM, you can switch on Application Insights monitoring from the Azure control panel. (There are
also separate articles about instrumenting live J2EE web apps and Azure Cloud Services.) You need a Microsoft
Azure subscription.

You have a choice of three routes to apply Application Insights to your .NET web applications:
Build time: Add the Application Insights SDK to your web app code.
Run time: Instrument your web app on the server, as described below, without rebuilding and redeploying
the code.
Both: Build the SDK into your web app code, and also apply the run-time extensions. Get the best of both
options.
Here's a summary of what you get by each route:

                              BUILD TIME                       RUN TIME

Requests & exceptions         Yes                              Yes

More detailed exceptions                                       Yes

Dependency diagnostics        On .NET 4.6+, but less detail    Yes, full detail: result codes,
                                                               SQL command text, HTTP verb

System performance counters   Yes                              Yes

API for custom telemetry      Yes                              No

Trace log integration         Yes                              No

Page view & user data         Yes                              No

Need to rebuild code          Yes                              No

Monitor a live Azure web app


If your application is running as an Azure web service, here's how to switch on monitoring:
Select Application Insights on the app's control panel in Azure.

When the Application Insights summary page opens, click the link at the bottom to open the full
Application Insights resource.

Monitoring Cloud and VM apps.


Enable client-side monitoring in Azure
If you have enabled Application Insights in Azure, you can add page view and user telemetry.
1. Select Settings > Application Settings
2. Under App Settings, add a new key value pair:
Key: APPINSIGHTS_JAVASCRIPT_ENABLED

Value: true

3. Save the settings and Restart your app.


The Application Insights JavaScript SDK is now injected into each web page.

Monitor a live IIS web app


If your app is hosted on an IIS server, enable Application Insights by using Status Monitor.
1. On your IIS web server, sign in with administrator credentials.
2. If Application Insights Status Monitor is not already installed, download and run the Status Monitor installer
(or run Web Platform Installer and search in it for Application Insights Status Monitor).
3. In Status Monitor, select the installed web application or website that you want to monitor. Sign in with
your Azure credentials.
Configure the resource where you want to see the results in the Application Insights portal. (Normally,
it's best to create a new resource. Select an existing resource if you already have web tests or client
monitoring for this app.)

4. Restart IIS.

Your web service is interrupted for a short while.

Customize monitoring options


Enabling Application Insights adds DLLs and ApplicationInsights.config to your web app. You can edit the
.config file to change some of the options.

When you re-publish your app, re-enable Application Insights


Before you re-publish your app, consider adding Application Insights to the code in Visual Studio. You'll get
more detailed telemetry and the ability to write custom telemetry.
If you want to re-publish without adding Application Insights to the code, be aware that the deployment
process may delete the DLLs and ApplicationInsights.config from the published web site. Therefore:
1. If you edited ApplicationInsights.config, take a copy of it before you re-publish your app.
2. Republish your app.
3. Re-enable Application Insights monitoring. (Use the appropriate method: either the Azure web app control
panel, or the Status Monitor on an IIS host.)
4. Reinstate any edits you performed on the .config file.

Troubleshooting runtime configuration of Application Insights


Can't connect? No telemetry?
Open the necessary outgoing ports in your server's firewall to allow Status Monitor to work.
Open Status Monitor and select your application on left pane. Check if there are any diagnostics
messages for this application in the "Configuration notifications" section:

On the server, if you see a message about "insufficient permissions", try the following:
In IIS Manager, select your application pool, open Advanced Settings, and under Process Model
note the identity.
In Computer management control panel, add this identity to the Performance Monitor Users group.
If you have MMA/SCOM (System Center Operations Manager) installed on your server, some versions can
conflict. Uninstall both SCOM and Status Monitor, and reinstall the latest versions.
See Troubleshooting.

System Requirements
OS support for Application Insights Status Monitor on Server:
Windows Server 2008
Windows Server 2008 R2
Windows Server 2012
Windows Server 2012 R2
Windows Server 2016
with latest SP and .NET Framework 4.5
On the client side: Windows 7, 8, 8.1 and 10, again with .NET Framework 4.5
IIS support: IIS 7, 7.5, 8, and 8.5 (IIS is required)

Automation with PowerShell


You can start and stop monitoring by using PowerShell on your IIS server.
First import the Application Insights module:
Import-Module 'C:\Program Files\Microsoft Application Insights\Status
Monitor\PowerShell\Microsoft.Diagnostics.Agent.StatusMonitor.PowerShell.dll'

Find out which apps are being monitored:


Get-ApplicationInsightsMonitoringStatus [-Name appName]

-Name (Optional) The name of a web app.


Displays the Application Insights monitoring status for each web app (or the named app) in this IIS server.
Returns ApplicationInsightsApplication for each app:
SdkState==EnabledAfterDeployment : App is being monitored, and was instrumented at run time, either
by the Status Monitor tool, or by Start-ApplicationInsightsMonitoring .
SdkState==Disabled : The app is not instrumented for Application Insights. Either it was never
instrumented, or run-time monitoring was disabled with the Status Monitor tool or with
Stop-ApplicationInsightsMonitoring .
SdkState==EnabledByCodeInstrumentation : The app was instrumented by adding the SDK to the source
code. Its SDK cannot be updated or stopped.
SdkVersion shows the version in use for monitoring this app.
LatestAvailableSdkVersion shows the version currently available on the NuGet gallery. To upgrade
the app to this version, use Update-ApplicationInsightsMonitoring .

Start-ApplicationInsightsMonitoring -Name appName -InstrumentationKey 00000000-000-000-000-0000000

-Name The name of the app in IIS


-InstrumentationKey The ikey of the Application Insights resource where you want the results to be
displayed.
This cmdlet only affects apps that are not already instrumented - that is, SdkState==NotInstrumented.
The cmdlet does not affect an app that is already instrumented. It does not matter whether the app was
instrumented at build time by adding the SDK to the code, or at run time by a previous use of this
cmdlet.
The SDK version used to instrument the app is the version that was most recently downloaded to this
server.
To download the latest version, use Update-ApplicationInsightsVersion.
Returns ApplicationInsightsApplication on success. If it fails, it logs a trace to stderr.

Name : Default Web Site/WebApp1


InstrumentationKey : 00000000-0000-0000-0000-000000000000
ProfilerState : ApplicationInsights
SdkState : EnabledAfterDeployment
SdkVersion : 1.2.1
LatestAvailableSdkVersion : 1.2.3

Stop-ApplicationInsightsMonitoring [-Name appName | -All]


-Name The name of an app in IIS
-All Stops monitoring all apps in this IIS server for which SdkState==EnabledAfterDeployment
Stops monitoring the specified apps and removes instrumentation. It only works for apps that have been
instrumented at run time using the Status Monitor tool or Start-ApplicationInsightsMonitoring. (
SdkState==EnabledAfterDeployment )
Returns ApplicationInsightsApplication.
Update-ApplicationInsightsMonitoring -Name appName [-InstrumentationKey "0000000-0000-000-000-0000" ]
-Name : The name of a web app in IIS.
-InstrumentationKey (Optional.) Use this to change the resource to which the app's telemetry is sent.
This cmdlet:
Upgrades the named app to the version of the SDK most recently downloaded to this machine. (Only
works if SdkState==EnabledAfterDeployment )
If you provide an instrumentation key, the named app is reconfigured to send telemetry to the
resource with that key. (Works if SdkState != Disabled )

Update-ApplicationInsightsVersion

Downloads the latest Application Insights SDK to the server.

Questions about Status Monitor


What is Status Monitor?
A desktop application that you install in your IIS web server. It helps you instrument and configure web apps.
When do I use Status Monitor?
To instrument any web app that is running on your IIS server - even if it is already running.
To enable additional telemetry for web apps that have been built with the Application Insights SDK at
compile time.
Can I close it after it runs?
Yes. After it has instrumented the websites you select, you can close it.
It doesn't collect telemetry by itself. It just configures the web apps and sets some permissions.
What does Status Monitor do?
When you select a web app for Status Monitor to instrument:
Downloads and places the Application Insights assemblies and .config file in the web app's binaries folder.
Modifies web.config to add the Application Insights HTTP tracking module.
Enables CLR profiling to collect dependency calls.
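For reference, the module registration it adds to web.config looks roughly like this (a sketch; the exact assembly reference varies between SDK releases):

```xml
<system.webServer>
  <modules>
    <!-- Registered by Status Monitor to collect HTTP request telemetry -->
    <add name="ApplicationInsightsWebTracking"
         type="Microsoft.ApplicationInsights.Web.ApplicationInsightsHttpModule, Microsoft.AI.Web"
         preCondition="managedHandler" />
  </modules>
</system.webServer>
```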
Do I need to run Status Monitor whenever I update the app?
Not if you redeploy incrementally.
If you select the 'delete existing files' option in the publish process, you would need to re-run Status Monitor to
configure Application Insights.
What telemetry is collected?
For applications that you instrument only at run-time by using Status Monitor:
HTTP requests
Calls to dependencies
Exceptions
Performance counters
For applications already instrumented at compile time:
Process counters.
Dependency calls (.NET 4.5); return values in dependency calls (.NET 4.6).
Exception stack trace values.
Learn more

Video

Next steps
View your telemetry:
Explore metrics to monitor performance and usage
Search events and logs to diagnose problems
Analytics for more advanced queries
Create dashboards
Add more telemetry:
Create web tests to make sure your site stays live.
Add web client telemetry to see exceptions from web page code and to let you insert trace calls.
Add Application Insights SDK to your code so that you can insert trace and log calls
Manually configure Application Insights for .NET
applications
11/1/2017 • 4 min to read

You can configure Application Insights to monitor a wide variety of applications or application roles, components,
or microservices. For web apps and services, Visual Studio offers one-step configuration. For other types of .NET
application, such as backend server roles or desktop applications, you can configure Application Insights manually.

Before you start


You need:
A subscription to Microsoft Azure. If your team or organization has an Azure subscription, the owner can add
you to it, using your Microsoft account.
Visual Studio 2013 or later.

1. Choose an Application Insights resource


The 'resource' is where your data is collected and displayed in the Azure portal. You need to decide whether to
create a new one, or share an existing one.
Part of a larger app: Use existing resource
If your web application has several components - for example, a front-end web app and one or more back-end
services - then you should send telemetry from all the components to the same resource. This will enable them to
be displayed on a single Application Map, and make it possible to trace a request from one component to another.
So, if you're already monitoring other components of this app, then just use the same resource.
Open the resource in the Azure portal.
Self-contained app: Create a new resource
If the new app is unrelated to other applications, then it should have its own resource.
Sign in to the Azure portal, and create a new Application Insights resource. Choose ASP.NET as the application type.
The choice of application type sets the default content of the resource blades.

2. Copy the Instrumentation Key


The key identifies the resource. You'll install it soon in the SDK, in order to direct data to the resource.

3. Install the Application Insights package in your application


Installing and configuring the Application Insights package varies depending on the platform you're working on.
1. In Visual Studio, right-click your project and choose Manage NuGet Packages.
2. Install the Application Insights package for Windows server apps,
"Microsoft.ApplicationInsights.WindowsServer."

Which version?
Check Include prerelease if you want to try our latest features. The relevant documents or blogs note
whether you need a prerelease version.
Can I use other packages?
Yes. Choose "Microsoft.ApplicationInsights" if you only want to use the API to send your own telemetry. The
Windows Server package includes the API plus a number of other packages such as performance counter
collection and dependency monitoring.
To upgrade to future package versions
We release a new version of the SDK from time to time.
To upgrade to a new release of the package, open NuGet package manager again and filter on installed packages.
Select Microsoft.ApplicationInsights.WindowsServer and choose Upgrade.
If you made any customizations to ApplicationInsights.config, save a copy of it before you upgrade, and afterwards
merge your changes into the new version.

4. Send telemetry
If you installed only the API package:
Set the instrumentation key in code, for example in main() :
TelemetryConfiguration.Active.InstrumentationKey = "your key";

Write your own telemetry using the API.
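For instance, a minimal sketch of sending custom telemetry through the API (the event and metric names are illustrative):

```csharp
var client = new TelemetryClient();
client.TrackEvent("OrderSubmitted");       // count an occurrence of a user action
client.TrackMetric("QueueLength", 42);     // record a numeric measurement
client.TrackTrace("Processing started");   // send a diagnostic log line
client.Flush(); // for short-lived processes, push buffered telemetry before exit
```

Flush matters for console-style apps: telemetry is buffered and batched, so a process that exits immediately can otherwise lose its last items.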


If you installed other Application Insights packages, you can, if you prefer, use the .config file to set the
instrumentation key:
Edit ApplicationInsights.config (which was added by the NuGet install). Insert this just before the closing tag:
<InstrumentationKey> the instrumentation key you copied </InstrumentationKey>

Make sure that the properties of ApplicationInsights.config in Solution Explorer are set to Build Action =
Content, Copy to Output Directory = Copy.
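With the key inserted, the end of ApplicationInsights.config looks something like this (a sketch; the real file contains many other settings above the key, and the key value is a placeholder):

```xml
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
  <!-- ...telemetry modules, initializers, and other settings... -->
  <InstrumentationKey>00000000-0000-0000-0000-000000000000</InstrumentationKey>
</ApplicationInsights>
```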
It's useful to set the instrumentation key in code if you want to switch the key for different build configurations. If
you set the key in code, you don't have to set it in the .config file.

Run your project


Press F5 to run your application and try it out: open different pages to generate some telemetry.
In Visual Studio, you'll see a count of the events that have been sent.

View your telemetry


Return to the Azure portal and browse to your Application Insights resource.
Look for data in the Overview charts. At first, you'll just see one or two points. For example:

Click through any chart to see more detailed metrics. Learn more about metrics.
No data?
Use the application, opening different pages so that it generates some telemetry.
Open the Search tile to see individual events. It sometimes takes a little longer for events to get through the
metrics pipeline.
Wait a few seconds and click Refresh. Charts refresh themselves periodically, but you can refresh manually if
you're waiting for some data to show up.
See Troubleshooting.

Publish your app


Now deploy your application to your server or to Azure and watch the data accumulate.
When you run in debug mode, telemetry is expedited through the pipeline, so you should see data appear
within seconds. When you deploy your app in the Release configuration, data accumulates more slowly.
No data after you publish to your server?
Open ports for outgoing traffic in your server's firewall. See this page for the list of required addresses.
Trouble on your build server?
Please see this Troubleshooting item.

NOTE
If your app generates a lot of telemetry, the adaptive sampling module will automatically reduce the volume that is sent to
the portal by sending only a representative fraction of events. However, events that are related to the same request will be
selected or deselected as a group, so that you can navigate between related events. Learn about sampling.
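The group-consistent behavior the note describes can be sketched as hash-based sampling: the keep/drop decision is derived from the operation (request) id alone, so every event carrying that id shares the same fate. This is an illustrative sketch in Java, not the SDK's actual sampling code:

```java
// Illustrative sketch of group-consistent sampling: the decision depends
// only on the operation (request) id, so all related events are selected
// or deselected together.
public class SamplingSketch {
    // Map the operation id to a stable score in [0, 100) and compare it
    // against the configured sampling percentage. Same id => same score
    // => same decision, every time.
    static boolean isSampledIn(String operationId, double samplingPercentage) {
        double score = Math.abs(operationId.hashCode() % 10_000) / 100.0;
        return score < samplingPercentage;
    }
}
```

Because the score is a pure function of the id, events for one request can be evaluated independently on different threads or machines and still land on the same side of the threshold.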

Next steps
Add more telemetry to get the full 360-degree view of your application.
Monitoring usage and performance in Windows
Desktop apps
11/1/2017 • 1 min to read

Azure Application Insights and HockeyApp let you monitor your deployed application for usage and performance.

IMPORTANT
We recommend HockeyApp to distribute and monitor desktop and device apps. With HockeyApp, you can manage
distribution, live testing, and user feedback, as well as monitor usage and crash reports. You can also export and query your
telemetry with Analytics.
Although telemetry can be sent to Application Insights from a desktop application, this is chiefly useful for debugging and
experimental purposes.

To send telemetry to Application Insights from a Windows application


1. In the Azure portal, create an Application Insights resource. For application type, choose [Link] app.
2. Take a copy of the Instrumentation Key. Find the key in the Essentials drop-down of the new resource you just
created.
3. In Visual Studio, edit the NuGet packages of your app project, and add
[Link]. (Or choose [Link] if you just want the
bare API, without the standard telemetry collection modules.)
4. Set the instrumentation key either in your code:
[Link] = " your key ";

or in [Link] (if you installed one of the standard telemetry packages):


<InstrumentationKey> your key </InstrumentationKey>

If you use [Link], make sure its properties in Solution Explorer are set to Build Action =
Content, Copy to Output Directory = Copy.
5. Use the API to send telemetry.
6. Run your app, and see the telemetry in the resource you created in the Azure Portal.

Example code
public partial class Form1 : Form
{
    private TelemetryClient tc = new TelemetryClient();
    ...
    private void Form1_Load(object sender, EventArgs e)
    {
        // Alternative to setting the ikey in the config file:
        [Link] = "key copied from portal";

        // Set session data:
        [Link] = [Link];
        [Link] = [Link]().ToString();
        [Link] = [Link]();

        // Log a page view:
        [Link]("Form1");
        ...
    }

    protected override void OnClosing(CancelEventArgs e)
    {
        stop = true;
        if (tc != null)
        {
            [Link](); // only for desktop apps

            // Allow time for flushing:
            [Link](1000);
        }
        [Link](e);
    }
}

Next steps
Create a dashboard
Diagnostic Search
Explore metrics
Write Analytics queries
Application Insights for [Link] Core
11/1/2017 • 2 min to read

Application Insights lets you monitor your web application for availability, performance and usage. With the
feedback you get about the performance and effectiveness of your app in the wild, you can make informed choices
about the direction of the design in each development lifecycle.

You'll need a subscription with Microsoft Azure. Sign in with a Microsoft account, which you might have for
Windows, Xbox Live, or other Microsoft cloud services. Your team might have an organizational subscription to
Azure: ask the owner to add you to it using your Microsoft account.

Getting started
In Visual Studio Solution Explorer, right-click your project and select Configure Application Insights, or Add
> Application Insights. Learn more.
If you don't see those menu commands, follow the manual Getting Started guide. You may need to do this if
your project was created with a version of Visual Studio before 2017.

Using Application Insights


Sign into the Microsoft Azure portal, select All Resources or Application Insights, and then select the resource
you created to monitor your app.
In a separate browser window, use your app for a while. You'll see data appearing in the Application Insights
charts. (You might have to click Refresh.) There will be only a small amount of data while you're developing, but
these charts really come alive when you publish your app and have many users.
The overview page shows key performance charts: server response time, page load time, and counts of failed
requests. Click any chart to see more charts and data.
Views in the portal fall into three main categories:
Metrics Explorer shows graphs and tables of metrics and counts, such as response times, failure rates, or
metrics you create yourself with the API. Filter and segment the data by property values to get a better
understanding of your app and its users.
Search Explorer lists individual events, such as specific requests, exceptions, log traces, or events you created
yourself with the API. Filter and search in the events, and navigate among related events to investigate issues.
Analytics lets you run SQL-like queries over your telemetry, and is a powerful analytical and diagnostic tool.

Alerts
You automatically get proactive diagnostic alerts that tell you about anomalous changes in failure rates and
other metrics.
Set up availability tests to test your website continually from locations worldwide, and get emails as soon as any
test fails.
Set up metric alerts to know if metrics such as response times or exception rates go outside acceptable limits.

Open source
Read and contribute to the code

Next steps
Add telemetry to your web pages to monitor page usage and performance.
Monitor dependencies to see if REST, SQL or other external resources are slowing you down.
Use the API to send your own events and metrics for a more detailed view of your app's performance and
usage.
Availability tests check your app constantly from around the world.
Application Insights for .NET console applications
1/3/2018 • 2 min to read

Application Insights lets you monitor your web application for availability, performance, and usage.
You need a subscription with Microsoft Azure. Sign in with a Microsoft account, which you might have for
Windows, Xbox Live, or other Microsoft cloud services. Your team might have an organizational subscription to
Azure: ask the owner to add you to it using your Microsoft account.

Getting started
In the Azure portal, create an Application Insights resource. For application type, choose [Link] app.
Take a copy of the Instrumentation Key. Find the key in the Essentials drop-down of the new resource you
created.
Install latest [Link] package.
Set the instrumentation key in your code before tracking any telemetry (or set
APPINSIGHTS_INSTRUMENTATIONKEY environment variable). After that, you should be able to manually track
telemetry and see it on the Azure portal

[Link] = " *your key* ";


var telemetryClient = new TelemetryClient();
[Link]("Hello World!");

Install the latest version of the [Link] package - it automatically tracks
HTTP, SQL, and other external dependency calls.
You may initialize and configure Application Insights from code or by using the [Link] file. Make
sure initialization happens as early as possible.
Using config file
By default, the Application Insights SDK looks for the [Link] file in the working directory when
TelemetryConfiguration is being created:

TelemetryConfiguration config = [Link]; // Reads the [Link] file if present

You may also specify a path to the config file.

TelemetryConfiguration configuration =
[Link]("[Link]");

For more information, see configuration file reference.


You may get a full example of the config file by installing the latest version of the
[Link] package. Here is the minimal configuration for dependency
collection that is equivalent to the code example.
<?xml version="1.0" encoding="utf-8"?>
<ApplicationInsights xmlns="[Link]">
  <TelemetryInitializers>
    <Add Type="[Link], [Link]"/>
  </TelemetryInitializers>
  <TelemetryModules>
    <Add Type="[Link], [Link]">
      <ExcludeComponentCorrelationHttpHeadersOnDomains>
        <Add>[Link]</Add>
        <Add>[Link]</Add>
        <Add>[Link]</Add>
        <Add>[Link]</Add>
        <Add>localhost</Add>
        <Add>[Link]</Add>
      </ExcludeComponentCorrelationHttpHeadersOnDomains>
      <IncludeDiagnosticSourceActivities>
        <Add>[Link]</Add>
        <Add>[Link]</Add>
      </IncludeDiagnosticSourceActivities>
    </Add>
  </TelemetryModules>
  <TelemetryChannel Type="[Link], [Link]"/>
</ApplicationInsights>

Configuring telemetry collection from code


During application start-up, create and configure a DependencyTrackingTelemetryModule instance - it must be a
singleton and must be preserved for the application lifetime.

var module = new DependencyTrackingTelemetryModule();

// prevent the Correlation Id header from being sent to certain endpoints. You may add other domains as needed.
[Link]("[Link]");
//...

// enable known dependency tracking; note that in future versions, we will extend this list.
// please check the default settings in [Link]server/blob/develop/Src/DependencyCollector/NuGet/[Link]#L20
[Link]("[Link]");
[Link]("[Link]");
//....

// initialize the module
[Link](configuration);

Add common telemetry initializers

// stamps telemetry with correlation identifiers
[Link](new OperationCorrelationTelemetryInitializer());

// ensures proper [Link] is set for Azure RESTful API calls
[Link](new HttpDependenciesParsingTelemetryInitializer());

For a .NET Framework Windows app, you may also install and initialize the Performance Counter collector module
as described here.
Full example
static void Main(string[] args)
{
    TelemetryConfiguration configuration = [Link];

    [Link] = "removed";
    [Link](new OperationCorrelationTelemetryInitializer());
    [Link](new HttpDependenciesParsingTelemetryInitializer());

    var telemetryClient = new TelemetryClient();

    using (InitializeDependencyTracking(configuration))
    {
        [Link]("Hello World!");

        using (var httpClient = new HttpClient())
        {
            // HTTP dependency is automatically tracked!
            [Link]("[Link]");
        }
    }

    // run app...

    // when the application stops, or when you are done with dependency tracking, do not forget to dispose the module
    [Link]();

    [Link]();
}

static DependencyTrackingTelemetryModule InitializeDependencyTracking(TelemetryConfiguration configuration)
{
    var module = new DependencyTrackingTelemetryModule();

    // prevent the Correlation Id header from being sent to certain endpoints. You may add other domains as needed.
    [Link]("[Link]");
    [Link]("[Link]");
    [Link]("[Link]");
    [Link]("[Link]");
    [Link]("localhost");
    [Link]("[Link]");

    // enable known dependency tracking; note that in future versions, we will extend this list.
    // please check the default settings in [Link]server/blob/develop/Src/DependencyCollector/NuGet/[Link]#L20
    [Link]("[Link]");
    [Link]("[Link]");

    // initialize the module
    [Link](configuration);

    return module;
}

Next steps
Monitor dependencies to see if REST, SQL, or other external resources are slowing you down.
Use the API to send your own events and metrics for a more detailed view of your app's performance and
usage.
Get started with Application Insights in a Java web
project
11/1/2017 • 9 min to read

Application Insights is an extensible analytics service for web developers that helps you understand the
performance and usage of your live application. Use it to detect and diagnose performance issues and
exceptions, and write code to track what users do with your app.

Application Insights supports Java apps running on Linux, Unix, or Windows.


You need:
Oracle JRE 1.6 or later, or Zulu JRE 1.6 or later
A subscription to Microsoft Azure.
If you have a web app that's already live, you could follow the alternative procedure to add the SDK at
runtime in the web server. That alternative avoids rebuilding the code, but you don't get the option to write
code to track user activity.

1. Get an Application Insights instrumentation key


1. Sign in to the Microsoft Azure portal.
2. Create an Application Insights resource. Set the application type to Java web application.

3. Find the instrumentation key of the new resource. You'll need to paste this key into your code project
shortly.

2. Add the Application Insights SDK for Java to your project


Choose the appropriate way for your project.
If you're using Eclipse to create a Maven or Dynamic Web project ...
Use the Application Insights SDK for Java plug-in.
If you're using Maven...
If your project is already set up to use Maven for build, merge the following code to your [Link] file.
Then, refresh the project dependencies to get the binaries downloaded.

<repositories>
<repository>
<id>central</id>
<name>Central</name>
<url>[Link]</url>
</repository>
</repositories>

<dependencies>
<dependency>
<groupId>[Link]</groupId>
<artifactId>applicationinsights-web</artifactId>
<!-- or applicationinsights-core for bare API -->
<version>[1.0,)</version>
</dependency>
</dependencies>

Build or checksum validation errors? Try using a specific version, such as <version>1.0.n</version>. You'll
find the latest version in the SDK release notes or in our Maven artifacts.
Need to update to a new SDK? Refresh your project's dependencies.
If you're using Gradle...
If your project is already set up to use Gradle for build, merge the following code to your [Link] file.
Then refresh the project dependencies to get the binaries downloaded.
repositories {
mavenCentral()
}

dependencies {
compile group: '[Link]', name: 'applicationinsights-web', version: '1.+'
// or applicationinsights-core for bare API
}

Build or checksum validation errors? Try using a specific version, such as version:'1.0.n'. You'll find the
latest version in the SDK release notes.
To update to a new SDK
Refresh your project's dependencies.
Otherwise ...
Manually add the SDK:
1. Download the Application Insights SDK for Java.
2. Extract the binaries from the zip file and add them to your project.
Questions...
What's the relationship between the -core and -web components in the zip?
applicationinsights-core gives you the bare API. You always need this component.
applicationinsights-web gives you metrics that track HTTP request counts and response times. You
can omit this component if you don't want this telemetry collected automatically - for example, if you
want to write your own.
To update the SDK when we publish changes
Download the latest Application Insights SDK for Java and replace the old ones.
Changes are described in the SDK release notes.

3. Add an Application Insights .xml file


Add [Link] to the resources folder in your project, or make sure it is added to your project’s
deployment class path. Copy the following XML into it.
Substitute the instrumentation key that you got from the Azure portal.
<?xml version="1.0" encoding="utf-8"?>
<ApplicationInsights xmlns="[Link]" schemaVersion="2014-05-30">

<!-- The key from the portal: -->

<InstrumentationKey>** Your instrumentation key **</InstrumentationKey>

<!-- HTTP request component (not required for bare API) -->

<TelemetryModules>
<Add
type="[Link]"/>
<Add
type="[Link]"/>
<Add
type="[Link]"/>
</TelemetryModules>

<!-- Events correlation (not required for bare API) -->


<!-- These initializers add context data to each event -->

<TelemetryInitializers>
<Add
type="[Link]"
/>
<Add
type="[Link]r"/>
<Add
type="[Link]"/>
<Add
type="[Link]"/>
<Add
type="[Link]"/>

</TelemetryInitializers>
</ApplicationInsights>

The instrumentation key is sent along with every item of telemetry and tells Application Insights to display
it in your resource.
The HTTP Request component is optional. It automatically sends telemetry about requests and response
times to the portal.
Events correlation is an addition to the HTTP request component. It assigns an identifier to each request
received by the server, and adds this identifier as a property to every item of telemetry as the property
'[Link]'. It allows you to correlate the telemetry associated with each request by setting a filter in
diagnostic search.
The Application Insights key can be passed dynamically from the Azure portal as a system property (-
DAPPLICATION_INSIGHTS_IKEY=your_ikey). If no system property is defined, the SDK checks for the
environment variable (APPLICATION_INSIGHTS_IKEY) in Azure App Settings. If both are undefined, the
default InstrumentationKey from [Link] is used. This sequence helps you manage
different InstrumentationKeys for different environments dynamically.
Alternative ways to set the instrumentation key
Application Insights SDK looks for the key in this order:
1. System property: -DAPPLICATION_INSIGHTS_IKEY=your_ikey
2. Environment variable: APPLICATION_INSIGHTS_IKEY
3. Configuration file: [Link]
You can also set it in code:

[Link] = "...";
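The lookup order above amounts to a simple precedence chain. Here it is sketched with a hypothetical helper — resolveIkey is illustrative, not an SDK method:

```java
// Hypothetical helper mirroring the key lookup order described above.
public class IkeyResolution {
    static String resolveIkey(String systemProperty, String environmentVariable, String configFileKey) {
        if (systemProperty != null && !systemProperty.isEmpty()) {
            return systemProperty;        // 1. -DAPPLICATION_INSIGHTS_IKEY system property
        }
        if (environmentVariable != null && !environmentVariable.isEmpty()) {
            return environmentVariable;   // 2. APPLICATION_INSIGHTS_IKEY environment variable
        }
        return configFileKey;             // 3. value from the configuration file
    }
}
```

A value set in code overrides all three sources, because it is applied directly to the active configuration after the lookup has run.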

4. Add an HTTP filter


The last configuration step allows the HTTP request component to log each web request. (Not required if you
just want the bare API.)
Locate and open the [Link] file in your project, and merge the following code under the web-app node,
where your application filters are configured.
To get the most accurate results, the filter should be mapped before all other filters.

<filter>
<filter-name>ApplicationInsightsWebFilter</filter-name>
<filter-class>
[Link]
</filter-class>
</filter>
<filter-mapping>
<filter-name>ApplicationInsightsWebFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>

If you're using Spring Web MVC 3.1 or later


Edit these elements in *-[Link] to include the Application Insights package:

<context:component-scan base-package="[Link], [Link]"/>

<mvc:interceptors>
<mvc:interceptor>
<mvc:mapping path="/**"/>
<bean
class="[Link]" />
</mvc:interceptor>
</mvc:interceptors>

If you're using Struts 2


Add this item to the Struts configuration file (usually named [Link] or [Link]):

<interceptors>
<interceptor name="ApplicationInsightsRequestNameInterceptor"
class="[Link]" />
</interceptors>
<default-interceptor-ref name="ApplicationInsightsRequestNameInterceptor" />

(If you have interceptors defined in a default stack, the interceptor can simply be added to that stack.)

5. Run your application


Either run it in debug mode on your development machine, or publish to your server.

6. View your telemetry in Application Insights


Return to your Application Insights resource in Microsoft Azure portal.
HTTP requests data appears on the overview blade. (If it isn't there, wait a few seconds and then click Refresh.)

Learn more about metrics.


Click through any chart to see more detailed aggregated metrics.

Application Insights assumes the format of HTTP requests for MVC applications is VERB controller/action.
For example, GET Home/Product/f9anuh81, GET Home/Product/2dffwrf5, and GET Home/Product/sdf96vws are
grouped into GET Home/Product. This grouping enables meaningful aggregations of requests, such as the
number of requests and the average execution time for requests.
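The grouping rule can be sketched as a small normalizer that keeps the verb plus the first two path segments and drops the trailing identifier. This is an illustrative sketch, not the SDK's actual implementation:

```java
// Illustrative sketch of the request-name grouping rule described above.
public class RequestGrouping {
    // Keep the verb plus the first two path segments, dropping the trailing id:
    // ("GET", "Home/Product/f9anuh81") -> "GET Home/Product"
    static String groupRequestName(String verb, String path) {
        String[] segments = path.split("/");
        if (segments.length <= 2) {
            return verb + " " + path;     // already a grouped form
        }
        return verb + " " + segments[0] + "/" + segments[1];
    }
}
```

With the per-instance id removed, thousands of distinct URLs collapse into a handful of request names that can be counted and averaged meaningfully.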

Instance data
Click through a specific request type to see individual instances.
Two kinds of data are displayed in Application Insights: aggregated data, stored and displayed as averages,
counts, and sums; and instance data - individual reports of HTTP requests, exceptions, page views, or custom
events.
When viewing the properties of a request, you can see the telemetry events associated with it such as requests
and exceptions.
Analytics: Powerful query language
As you accumulate more data, you can run queries both to aggregate data and to find individual instances.
Analytics is a powerful tool, both for understanding performance and usage and for diagnostic purposes.

7. Install your app on the server


Now publish your app to the server, let people use it, and watch the telemetry show up on the portal.
Make sure your firewall allows your application to send telemetry to these ports:
[Link]
[Link]
If outgoing traffic must be routed through a firewall, define system properties [Link] and
[Link] .

On Windows servers, install:


Microsoft Visual C++ Redistributable
(This component enables performance counters.)

Exceptions and request failures


Unhandled exceptions are automatically collected:

To collect data on other exceptions, you have two options:


Insert calls to trackException() in your code.
Install the Java Agent on your server. You specify the methods you want to watch.

Monitor method calls and external dependencies


Install the Java Agent to log specified internal methods and calls made through JDBC, with timing data.

Performance counters
Open Settings, Servers, to see a range of performance counters.
Customize performance counter collection
To disable collection of the standard set of performance counters, add the following code under the root node
of the [Link] file:

<PerformanceCounters>
<UseBuiltIn>False</UseBuiltIn>
</PerformanceCounters>

Collect additional performance counters


You can specify additional performance counters to be collected.
JMX counters (exposed by the Java Virtual Machine)

<PerformanceCounters>
<Jmx>
<Add objectName="[Link]:type=ClassLoading" attribute="TotalLoadedClassCount" displayName="Loaded Class Count"/>
<Add objectName="[Link]:type=Memory" attribute="[Link]" displayName="Heap Memory Usage-used" type="composite"/>
</Jmx>
</PerformanceCounters>

displayName – The name displayed in the Application Insights portal.


objectName – The JMX object name.
attribute – The attribute of the JMX object name to fetch.
type (optional) - The type of JMX object’s attribute:
Default: a simple type such as int or long.
composite: the perf counter data is in the format of '[Link]'
tabular: the perf counter data is in the format of a table row
Windows performance counters
Each Windows performance counter is a member of a category (in the same way that a field is a member of a
class). Categories can either be global, or can have numbered or named instances.

<PerformanceCounters>
<Windows>
<Add displayName="Process User Time" categoryName="Process" counterName="%User Time" instanceName="__SELF__" />
<Add displayName="Bytes Printed per Second" categoryName="Print Queue" counterName="Bytes Printed/sec" instanceName="Fax" />
</Windows>
</PerformanceCounters>

displayName – The name displayed in the Application Insights portal.


categoryName – The performance counter category (performance object) with which this performance
counter is associated.
counterName – The name of the performance counter.
instanceName – The name of the performance counter category instance, or an empty string (""), if the
category contains a single instance. If the categoryName is Process, and the performance counter you'd like
to collect is from the current JVM process on which your app is running, specify "__SELF__" .

Your performance counters are visible as custom metrics in Metrics Explorer.

Unix performance counters


Install collectd with the Application Insights plugin to get a wide variety of system and network data.

Get user and session data


OK, you're sending telemetry from your web server. Now to get the full 360-degree view of your application,
you can add more monitoring:
Add telemetry to your web pages to monitor page views and user metrics.
Set up web tests to make sure your application stays live and responsive.
Capture log traces
You can use Application Insights to slice and dice logs from Log4J, Logback, or other logging frameworks. You
can correlate the logs with HTTP requests and other telemetry. Learn how.

Send your own telemetry


Now that you've installed the SDK, you can use the API to send your own telemetry.
Track custom events and metrics to learn what users are doing with your application.
Search events and logs to help diagnose problems.

Availability web tests


Application Insights can test your website at regular intervals to check that it's up and responding well. To set
up, click Web tests.

You'll get charts of response times, plus email notifications if your site goes down.

Learn more about availability web tests.

Questions? Problems?
Troubleshooting Java

Next steps
Monitor dependency calls
Monitor Unix performance counters
Add monitoring to your web pages to monitor page load times, AJAX calls, browser exceptions.
Write custom telemetry to track usage in the browser or at the server.
Create dashboards to bring together the key charts for monitoring your system.
Use Analytics for powerful queries over telemetry from your app
For more information, visit Azure for Java developers.
Application Insights for Java web apps that are
already live
11/1/2017 • 3 min to read

If you have a web application that is already running on your J2EE server, you can start monitoring it with
Application Insights without the need to make code changes or recompile your project. With this option, you get
information about HTTP requests sent to your server, unhandled exceptions, and performance counters.
You'll need a subscription to Microsoft Azure.

NOTE
The procedure on this page adds the SDK to your web app at runtime. This runtime instrumentation is useful if you don't
want to update or rebuild your source code. But if you can, we recommend you add the SDK to the source code instead.
That gives you more options such as writing code to track user activity.

1. Get an Application Insights instrumentation key


1. Sign in to the Microsoft Azure portal
2. Create a new Application Insights resource and set the application type to Java web application.

The resource is created in a few seconds.


3. Open the new resource and get its instrumentation key. You'll need to paste this key into your code project
shortly.
2. Download the SDK
1. Download the Application Insights SDK for Java.
2. On your server, extract the SDK contents to the directory from which your project binaries are loaded. If you’re
using Tomcat, this directory would typically be under webapps/<your_app_name>/WEB-INF/lib
Note that you need to repeat this on each server instance, and for each app.

3. Add an Application Insights xml file


Create [Link] in the folder in which you added the SDK. Put into it the following XML.
Substitute the instrumentation key that you got from the Azure portal.
<?xml version="1.0" encoding="utf-8"?>
<ApplicationInsights xmlns="[Link]" schemaVersion="2014-05-30">

<!-- The key from the portal: -->

<InstrumentationKey>** Your instrumentation key **</InstrumentationKey>

<!-- HTTP request component (not required for bare API) -->

<TelemetryModules>
<Add
type="[Link]"/>
<Add
type="[Link]"/>
<Add
type="[Link]"/>
</TelemetryModules>

<!-- Events correlation (not required for bare API) -->


<!-- These initializers add context data to each event -->

<TelemetryInitializers>
<Add
type="[Link]"/>
<Add
type="[Link]"/>
<Add
type="[Link]"/>
<Add
type="[Link]"/>
<Add
type="[Link]"/>

</TelemetryInitializers>
</ApplicationInsights>

The instrumentation key is sent along with every item of telemetry and tells Application Insights to display it in
your resource.
The HTTP Request component is optional. It automatically sends telemetry about requests and response times
to the portal.
Events correlation is an addition to the HTTP request component. It assigns an identifier to each request
received by the server, and adds this identifier as a property to every item of telemetry as the property
'[Link]'. It allows you to correlate the telemetry associated with each request by setting a filter in
diagnostic search.
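The mechanism can be sketched as follows: generate one identifier per incoming request, and stamp it onto every telemetry item produced while handling that request, so the items can later be filtered as a group. The helper class and the property name "RequestId" below are illustrative, not the SDK's actual names:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Illustrative sketch of per-request correlation: one id per request,
// stamped onto every telemetry item produced while handling it.
public class CorrelationSketch {
    // A fresh identifier is generated once per incoming request.
    static String newRequestId() {
        return UUID.randomUUID().toString();
    }

    // Return a copy of a telemetry item's properties with the request id attached.
    static Map<String, String> stamp(Map<String, String> properties, String requestId) {
        Map<String, String> stamped = new HashMap<>(properties);
        stamped.put("RequestId", requestId);
        return stamped;
    }
}
```

Filtering diagnostic search on that single property value then returns every event — requests, traces, exceptions — that belongs to one request.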

4. Add an HTTP filter


Locate and open the [Link] file in your project, and merge the following snippet of code under the web-app
node, where your application filters are configured.
To get the most accurate results, the filter should be mapped before all other filters.
<filter>
<filter-name>ApplicationInsightsWebFilter</filter-name>
<filter-class>
[Link]
</filter-class>
</filter>
<filter-mapping>
<filter-name>ApplicationInsightsWebFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>

5. Check firewall exceptions


You might need to set exceptions to send outgoing data.

6. Restart your web app


7. View your telemetry in Application Insights
Return to your Application Insights resource in Microsoft Azure portal.
Telemetry about HTTP requests appears on the overview blade. (If it isn't there, wait a few seconds and then click
Refresh.)

Click through any chart to see more detailed metrics.

And when viewing the properties of a request, you can see the telemetry events associated with it such as
requests and exceptions.
Learn more about metrics.

Next steps
Add telemetry to your web pages to monitor page views and user metrics.
Set up web tests to make sure your application stays live and responsive.
Capture log traces
Search events and logs to help diagnose problems.
Monitor Docker applications in Application Insights
11/1/2017 • 3 min to read

Lifecycle events and performance counters from Docker containers can be charted in Application Insights. Install
the Application Insights image in a container on your host, and it will display performance counters for the host, as
well as for the other images.
With Docker, you distribute your apps in lightweight containers complete with all dependencies. They'll run on any
host machine that runs a Docker Engine.
When you run the Application Insights image on your Docker host, you get these benefits:
Lifecycle telemetry about all the containers running on the host - start, stop, and so on.
Performance counters for all the containers. CPU, memory, network usage, and more.
If you installed Application Insights SDK for Java in the apps running in the containers, all the telemetry of those
apps will have additional properties identifying the container and host machine. So for example, if you have
instances of an app running in more than one host, you can easily filter your app telemetry by host.

Set up your Application Insights resource


1. Sign into Microsoft Azure portal and open the Application Insights resource for your app; or create a new
one.
Which resource should I use? If the apps that you are running on your host were developed by someone
else, then you need to create a new Application Insights resource. This is where you view and analyze the
telemetry. (Select 'General' for the app type.)
But if you're the developer of the apps, then we hope you added Application Insights SDK to each of them. If
they're all really components of a single business application, then you might configure all of them to send
telemetry to one resource, and you'll use that same resource to display the Docker lifecycle and
performance data.
A third scenario is that you developed most of the apps, but you are using separate resources to display
their telemetry. In that case, you probably also want to create a separate resource for the Docker data.
2. Add the Docker tile: Choose Add Tile, drag the Docker tile from the gallery, and then click Done.

3. Click the Essentials drop-down and copy the Instrumentation Key. You use this to tell the SDK where to
send its telemetry.

Keep that browser window handy, as you'll come back to it soon to look at your telemetry.

Run the Application Insights monitor on your host


Now that you've got somewhere to display the telemetry, you can set up the containerized app that will collect and
send it.
1. Connect to your Docker host.
2. Edit your instrumentation key into this command, and then run it:

docker run -v /var/run/docker.sock:/docker.sock -d microsoft/applicationinsights ikey=000000-1111-2222-3333-444444444


Only one Application Insights image is required per Docker host. If your application is deployed on multiple Docker
hosts, then repeat the command on every host.

Update your app


If your application is instrumented with the Application Insights SDK for Java, add the following line into the
ApplicationInsights.xml file in your project, under the <TelemetryInitializers> element:

<Add type="[Link]"/>

This adds Docker information, such as the container and host IDs, to every telemetry item sent from your app.

View your telemetry


Go back to your Application Insights resource in the Azure portal.
Click through the Docker tile.
You'll shortly see data arriving from the Docker app, especially if you have other containers running on your
Docker engine.
Here are some of the views you can get.
Perf counters by host, activity by image
Click any host or image name for more detail.
To customize the view, click any chart, the grid heading, or use Add Chart.
Learn more about metrics explorer.
Docker container events
To investigate individual events, click Search. Search and filter to find the events you want. Click any event to get
more detail.
Exceptions by container name

Docker context added to app telemetry


Request telemetry sent from the application instrumented with AI SDK, enriched with Docker context:
Processor time and available memory performance counters, enriched and grouped by Docker container name:

Q&A
What does Application Insights give me that I can't get from Docker?
Detailed breakdown of performance counters by container and image.
Integrate container and app data in one dashboard.
Export telemetry for further analysis to a database, Power BI or other dashboard.
How do I get telemetry from the app itself?
Install the Application Insights SDK in the app. Learn how for: Java web apps, Windows web apps.

Video

Next steps
Application Insights for Java
Application Insights for [Link]
Application Insights for [Link]
Monitor your Node.js services and apps with Application Insights
12/13/2017 • 5 min to read

Azure Application Insights monitors your backend services and components after deployment, to help you
discover and rapidly diagnose performance and other issues. You can use Application Insights for Node.js
services that are hosted in your datacenter, in Azure VMs and web apps, and even in other public clouds.
To receive, store, and explore your monitoring data, include the SDK in your code, and then set up a
corresponding Application Insights resource in Azure. The SDK sends data to that resource for further analysis
and exploration.
The Node.js SDK can automatically monitor incoming and outgoing HTTP requests, exceptions, and some system
metrics. Beginning in version 0.20, the SDK also can monitor some common third-party packages, like MongoDB,
MySQL, and Redis. All events related to an incoming HTTP request are correlated for faster troubleshooting.
You can use the TelemetryClient API to manually instrument and monitor additional aspects of your app and
system. We describe the TelemetryClient API in more detail later in this article.

Get started
Complete the following tasks to set up monitoring for an app or service.
Prerequisites
Before you begin, make sure that you have an Azure subscription, or get a new one for free. If your organization
already has an Azure subscription, an administrator can follow these instructions to add you to it.
Set up an Application Insights resource
1. Sign in to the Azure portal.
2. Select New > Developer tools > Application Insights. The resource includes an endpoint for receiving
telemetry data, storage for this data, saved reports and dashboards, rule and alert configuration, and
more.
3. On the resource creation page, in the Application Type box, select Node.js Application. The app type
determines the default dashboards and reports that are created. (Any Application Insights resource can
collect data from any language and platform.)

Set up the Node.js SDK


Include the SDK in your app, so it can gather data.
1. Copy your resource's Instrumentation Key (also called an ikey) from the Azure portal. Application Insights
uses the ikey to map data to your Azure resource. Before the SDK can use your ikey, you must specify the
ikey in an environment variable or in your code.
2. Add the Node.js SDK library to your app's dependencies via npm. From the root folder of your
app, run:

npm install applicationinsights --save

3. Explicitly load the library in your code. Because the SDK injects instrumentation into many other libraries,
load the library as early as possible, even before other require statements.
At the top of your first .js file, add the following code. The setup method configures the ikey (and thus,
the Azure resource) to be used by default for all tracked items.

const appInsights = require("applicationinsights");

appInsights.setup("<instrumentation_key>");
appInsights.start();

You also can provide an ikey via the environment variable APPINSIGHTS_INSTRUMENTATIONKEY, instead
of passing it manually to setup() or new appInsights.TelemetryClient(). This practice lets you keep ikeys
out of committed source code, and you can specify different ikeys for different environments.
For additional configuration options, see the following sections.
You can try the SDK without sending telemetry by setting
appInsights.defaultClient.config.disableAppInsights = true .
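The key-resolution order just described (an explicit ikey wins, then the environment variable) can be sketched as a small helper. This is illustrative only; the SDK does this internally, and the function name here is made up:

```javascript
// Illustrative sketch (not SDK code): an ikey passed explicitly wins;
// otherwise fall back to the APPINSIGHTS_INSTRUMENTATIONKEY environment
// variable; otherwise there is no key to use.
function resolveInstrumentationKey(explicitKey) {
  return explicitKey || process.env.APPINSIGHTS_INSTRUMENTATIONKEY || null;
}
```

Keeping the key in the environment also makes it easy to point staging and production deployments at different Application Insights resources without a code change.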
Monitor your app
The SDK automatically gathers telemetry about the Node.js runtime and about some common third-party
modules. Use your application to generate some of this data.
Then, in the Azure portal go to the Application Insights resource that you created earlier. In the Overview
timeline, look for your first few data points. To see more detailed data, select different components in the charts.
To view the topology that is discovered for your app, select the Application map button. Select components in
the map to see more details.

To learn more about your app, and to troubleshoot problems, in the INVESTIGATE section, select the other
views that are available.
No data?
Because the SDK batches data for submission, there might be a delay before items are displayed in the portal. If
you don't see data in your resource, try some of the following fixes:
Continue to use the application. Take more actions to generate more telemetry.
Click Refresh in the portal resource view. Charts periodically refresh on their own, but manually refreshing
forces them to refresh immediately.
Verify that required outgoing ports are open.
Use Search to look for specific events.
Check the FAQ.

SDK configuration
The SDK's configuration methods and default values are listed in the following code example.
To fully correlate events in a service, be sure to set .setAutoDependencyCorrelation(true) . With this option set, the
SDK can track context across asynchronous callbacks in Node.js.

const appInsights = require("applicationinsights");

appInsights.setup("<instrumentation_key>")
.setAutoDependencyCorrelation(true)
.setAutoCollectRequests(true)
.setAutoCollectPerformance(true)
.setAutoCollectExceptions(true)
.setAutoCollectDependencies(true)
.setAutoCollectConsole(true)
.setUseDiskRetryCaching(true)
.start();

TelemetryClient API
For a full description of the TelemetryClient API, see Application Insights API for custom events and metrics.
You can track any request, event, metric, or exception by using the Application Insights Node.js SDK. The
following code example demonstrates some of the APIs that you can use:
let appInsights = require("applicationinsights");
appInsights.setup().start(); // assuming ikey is in env var
let client = appInsights.defaultClient;

client.trackEvent({name: "my custom event", properties: {customProperty: "custom property value"}});
client.trackException({exception: new Error("handled exceptions can be logged with this method")});
client.trackMetric({name: "custom metric", value: 3});
client.trackTrace({message: "trace message"});
client.trackDependency({target: "http://dbname", name: "select customers proc", data: "SELECT * FROM Customers", duration: 231, resultCode: 0, success: true, dependencyTypeName: "ZSQL"});
client.trackRequest({name: "GET /customers", url: "http://myapp/customers", duration: 309, resultCode: 200, success: true});

let http = require("http");

http.createServer((req, res) => {
    client.trackNodeHttpRequest({request: req, response: res}); // Place at the beginning of your request handler
});

Track your dependencies


Use the following code to track your dependencies:

let appInsights = require("applicationinsights");
let client = appInsights.defaultClient;

var success = false;
let startTime = Date.now();
// Execute dependency call here...
let duration = Date.now() - startTime;
success = true;

client.trackDependency({dependencyTypeName: "dependency name", name: "command name", duration: duration, success: success});
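For asynchronous calls, the same start/stop timing can be wrapped in a reusable helper. This is a sketch under assumptions: withDependencyTracking is a made-up name, and the track parameter stands in for the SDK's trackDependency method so the pattern stays visible:

```javascript
// Hypothetical helper (not part of the SDK): times a promise-returning
// operation and reports its duration and success/failure to a telemetry
// sink with the same shape that trackDependency accepts.
function withDependencyTracking(name, operation, track) {
  const startTime = Date.now();
  return operation().then(
    (result) => {
      track({dependencyTypeName: "custom", name: name, duration: Date.now() - startTime, success: true});
      return result;
    },
    (err) => {
      track({dependencyTypeName: "custom", name: name, duration: Date.now() - startTime, success: false});
      throw err;
    }
  );
}
```

With the real SDK you would pass something like client.trackDependency.bind(client) as the track argument, so failed calls are timed and reported just like successful ones.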

Add a custom property to all events


Use the following code to add a custom property to all events:

appInsights.defaultClient.commonProperties = {
    environment: process.env.SOME_ENV_VARIABLE
};

Track HTTP GET requests


Use the following code to track HTTP GET requests:

var server = http.createServer((req, res) => {
    if ( req.method === "GET" ) {
        appInsights.defaultClient.trackNodeHttpRequest({request: req, response: res});
    }
    // Other work here...
    res.end();
});

Track server startup time


Use the following code to track server startup time:
let start = Date.now();
server.on("listening", () => {
    let duration = Date.now() - start;
    appInsights.defaultClient.trackMetric({name: "server startup time", value: duration});
});

Next steps
Monitor your telemetry in the portal
Write Analytics queries over your telemetry
Application Insights for web pages
11/1/2017 • 7 min to read

Find out about the performance and usage of your web page or app. If you add Application Insights to
your page script, you get timings of page loads and AJAX calls, counts and details of browser exceptions
and AJAX failures, as well as users and session counts. All these can be segmented by page, client OS and
browser version, geo location, and other dimensions. You can set alerts on failure counts or slow page
loading. And by inserting trace calls in your JavaScript code, you can track how the different features of
your web page application are used.
Application Insights can be used with any web pages - you just add a short piece of JavaScript. If your web
service is Java or ASP.NET, you can integrate telemetry from your server and clients.

You need a subscription to Microsoft Azure. If your team has an organizational subscription, ask the owner
to add your Microsoft Account to it. Development and small-scale use won't cost anything.

Set up Application Insights for your web page


Add the loader code snippet to your web pages, as follows.
Open or create Application Insights resource
The Application Insights resource is where data about your page's performance and usage is displayed.
Sign in to the Azure portal.
If you already set up monitoring for the server side of your app, you already have a resource:

If you don't have one, create it:

Questions already? More about creating a resource.


Add the SDK script to your app or web pages
In Quick Start, get the script for web pages:
Insert the script just before the </head> tag of every page you want to track. If your website has a master
page, you can put the script there. For example:
In an ASP.NET MVC project, you'd put it in View\Shared\_Layout.cshtml
In a SharePoint site, on the control panel, open Site Settings / Master Page.
The script contains the instrumentation key that directs the data to your Application Insights resource.
(Deeper explanation of the script.)
(If you're using a well-known web page framework, look around for Application Insights adaptors. For
example, there's an AngularJS module.)

Detailed configuration
There are several parameters you can set, though in most cases, you shouldn't need to. For example, you
can disable or limit the number of Ajax calls reported per page view (to reduce traffic). Or you can set
debug mode to have telemetry move rapidly through the pipeline without being batched.
To set these parameters, look for this line in the code snippet, and add more comma-separated items after
it:

})({
instrumentationKey: "..."
// Insert here
});

The available parameters include:

// Send telemetry immediately without batching.
// Remember to remove this when no longer required, as it
// can affect browser performance.
enableDebug: boolean,

// Don't log browser exceptions.
disableExceptionTracking: boolean,

// Don't log ajax calls.
disableAjaxTracking: boolean,

// Limit the number of Ajax calls logged, to reduce traffic.
maxAjaxCallsPerView: 10, // default is 500

// Time page load up to execution of the first trackPageView().
overridePageViewDuration: boolean,

// Set these dynamically for an authenticated user.
appUserId: string,
accountId: string,

Run your app


Run your web app, use it a while to generate telemetry, and wait a few seconds. You can either run it using
the F5 key on your development machine, or publish it and let users play with it.
If you want to check the telemetry that a web app is sending to Application Insights, use your browser's
debugging tools (F12 on many browsers). Data is sent to the Application Insights collection endpoint.

Explore your browser performance data


Open the Browser blade to show aggregated performance data from your users' browsers.
No data yet? Click Refresh at the top of the page. Still nothing? See Troubleshooting.
The Browser blade is a Metrics Explorer blade with preset filters and chart selections. You can edit the time
range, filters, and chart configuration if you want, and save the result as a favorite. Click Restore defaults
to get back to the original blade configuration.

Page load performance


At the top is a segmented chart of page load times. The total height of the chart represents the average
time to load and display pages from your app in your users' browsers. The time is measured from when
the browser sends the initial HTTP request until all synchronous load events have been processed,
including layout and running scripts. It doesn't include asynchronous tasks such as loading web parts from
AJAX calls.
The chart segments the total page load time into the standard timings defined by W3C.
Note that the network connect time is often lower than you might expect, because it's an average over all
requests from the browser to the server. Many individual requests have a connect time of 0 because there
is already an active connection to the server.
Slow loading?
Slow page loads are a major source of dissatisfaction for your users. If the chart indicates slow page loads,
it's easy to do some diagnostic research.
The chart shows the average of all page loads in your app. To see if the problem is confined to particular
pages, look further down the blade, where there's a grid segmented by page URL:

Notice the page view count and standard deviation. If the page count is very low, then the issue isn't
affecting users much. A high standard deviation (comparable to the average itself) indicates a lot of
variation between individual measurements.
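As a concrete illustration of that rule of thumb (hypothetical helper, not part of Application Insights), compare the standard deviation of a set of load-time samples to their mean:

```javascript
// Mean and population standard deviation of page-load samples (milliseconds).
function loadTimeStats(samples) {
  const mean = samples.reduce((sum, x) => sum + x, 0) / samples.length;
  const variance = samples.reduce((sum, x) => sum + (x - mean) ** 2, 0) / samples.length;
  return { mean: mean, stdDev: Math.sqrt(variance) };
}
```

A consistent page (say samples of 1000, 1100, and 900 ms) has a standard deviation far below its mean, while samples that alternate between 200 and 2000 ms yield a standard deviation comparable to the mean, signalling wide variation between individual page views.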
Zoom in on one URL and one page view. Click any page name to see a blade of browser charts filtered
just to that URL, and then click an instance of a page view.

Click ... for a full list of properties for that event, or inspect the Ajax calls and related events. Slow Ajax
calls affect the overall page load time if they are synchronous. Related events include server requests for
the same URL (if you've set up Application Insights on your web server).
Page performance over time. Back at the Browsers blade, change the Page View Load Time grid into a
line chart to see if there were peaks at particular times:

Segment by other dimensions. Maybe your pages are slower to load on a particular browser, client OS,
or user locality? Add a new chart and experiment with the Group-by dimension.
AJAX Performance
Make sure any AJAX calls in your web pages are performing well. They are often used to fill parts of your
page asynchronously. Although the overall page might load promptly, your users could be frustrated by
staring at blank web parts, waiting for data to appear in them.
AJAX calls made from your web page are shown on the Browsers blade as dependencies.
There are summary charts in the upper part of the blade:

and detailed grids lower down:

Click any row for specific details.

NOTE
If you delete the Browsers filter on the blade, both server and AJAX dependencies are included in these charts. Click
Restore Defaults to reconfigure the filter.

To drill into failed Ajax calls, scroll down to the Dependency failures grid, and then click a row to see
specific instances.
Click ... for the full telemetry for an Ajax call.
No Ajax calls reported?
Ajax calls include any HTTP/HTTPS calls made from the script of your web page. If you don't see them
reported, check that the code snippet doesn't set the disableAjaxTracking or maxAjaxCallsPerView
parameters.

Browser exceptions
On the Browsers blade, there's an exceptions summary chart, and a grid of exception types further down
the blade.
If you don't see browser exceptions reported, check that the code snippet doesn't set the
disableExceptionTracking parameter.

Inspect individual page view events


Usually page view telemetry is analyzed by Application Insights and you see only cumulative reports,
averaged over all your users. But for debugging purposes, you can also look at individual page view
events.
In the Diagnostic Search blade, set Filters to Page View.
Select any event to see more detail. In the details page, click "..." to see even more detail.

NOTE
If you use Search, notice that you have to match whole words: "Abou" and "bout" do not match "About".

You can also use the powerful Log Analytics query language to search page views.
Page view properties
Page view duration
By default, the time it takes to load the page, from client request to full load (including auxiliary
files but excluding asynchronous tasks such as Ajax calls).
If you set overridePageViewDuration in the page configuration, the interval between client
request to execution of the first trackPageView . If you moved trackPageView from its usual
position after the initialization of the script, it will reflect a different value.
If overridePageViewDuration is set and a duration argument is provided in the trackPageView()
call, then the argument value is used instead.
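The precedence between these three cases can be summarized in a small sketch (illustrative pseudologic, not SDK source; the function and parameter names here are made up):

```javascript
// Duration reported for a page view, per the rules above: an explicit
// duration argument to trackPageView() wins when the override is set;
// otherwise the override interval applies; otherwise the default
// client-request-to-full-load timing is used.
function reportedPageViewDuration(overrideSet, explicitDuration, timeToFirstTrackPageView, timeToFullLoad) {
  if (overrideSet && explicitDuration !== undefined) return explicitDuration;
  if (overrideSet) return timeToFirstTrackPageView;
  return timeToFullLoad;
}
```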

Custom page counts


By default, a page count occurs each time a new page loads into the client browser. But you might want to
count additional page views. For example, a page might display its content in tabs and you want to count a
page when the user switches tabs. Or JavaScript code in the page might load new content without
changing the browser's URL.
Insert a JavaScript call like this at the appropriate point in your client code:

appInsights.trackPageView(myPageName);

The page name can contain the same characters as a URL, but anything after "#" or "?" is ignored.
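That trimming rule can be stated as a one-liner (a hypothetical helper shown only for clarity; the SDK applies this internally):

```javascript
// Everything from the first "#" or "?" onward is dropped from the page name.
function effectivePageName(name) {
  return name.split(/[#?]/)[0];
}
```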

Usage tracking
Want to find out what your users do with your app?
Learn about usage tracking
Learn about custom events and metrics API.

Video

Next steps
Track usage
Custom events and metrics
Build-measure-learn
Monitor a SharePoint site with Application Insights
11/1/2017 • 2 min to read

Azure Application Insights monitors the availability, performance and usage of your apps. Here you'll learn how to
set it up for a SharePoint site.

Create an Application Insights resource


In the Azure portal, create a new Application Insights resource. Choose ASP.NET as the application type.

The blade that opens is the place where you'll see performance and usage data about your app. To get back to it
next time you log in to Azure, find the tile for it on the start screen, or click Browse to find it.

Add our script to your web pages


In Quick Start, get the script for web pages:
Insert the script just before the </head> tag of every page you want to track. If your website has a master page,
you can put the script there. For example, in an ASP.NET MVC project, you'd put it in View\Shared\_Layout.cshtml
The script contains the instrumentation key that directs the telemetry to your Application Insights resource.
Add the code to your site pages
On the master page
If you can edit the site's master page, that will provide monitoring for every page in the site.
Check out the master page and edit it using SharePoint Designer or any other editor.
Add the code just before the </head> tag.

Or on individual pages
To monitor a limited set of pages, add the script separately to each page.
Insert a web part and embed the code snippet in it.

View data about your app


Redeploy your app.
Return to your application blade in the Azure portal.
The first events will appear in Search.
Click Refresh after a few seconds if you're expecting more data.
From the overview blade, click Usage analytics to see charts of users, sessions, and page views:

Click any chart to see more details - for example Page Views:
Or Users:

Capturing User Id
The standard web page code snippet doesn't capture the user id from SharePoint, but you can do that with a small
modification.
1. Copy your app's instrumentation key from the Essentials drop-down in Application Insights.
2. Substitute the instrumentation key for 'XXXX' in the snippet below.
3. Embed the script in your SharePoint app instead of the snippet you get from the portal.
<SharePoint:ScriptLink ID="ScriptLink1" name="SP.js" runat="server" localizable="false" loadafterui="true" />
<SharePoint:ScriptLink ID="ScriptLink2" name="SP.UserProfiles.js" runat="server" localizable="false"
loadafterui="true" />

<script type="text/javascript">
var personProperties;

// Ensure that the SP.UserProfiles.js file is loaded before the custom code runs.
SP.SOD.executeOrDelayUntilScriptLoaded(getUserProperties, 'SP.UserProfiles');

function getUserProperties() {
// Get the current client context and PeopleManager instance.
var clientContext = SP.ClientContext.get_current();
var peopleManager = new SP.UserProfiles.PeopleManager(clientContext);

// Get user properties for the target user.


// To get the PersonProperties object for the current user, use the
// getMyProperties method.

personProperties = peopleManager.getMyProperties();

// Load the PersonProperties object and send the request.


clientContext.load(personProperties);
clientContext.executeQueryAsync(onRequestSuccess, onRequestFail);
}

// This function runs if the executeQueryAsync call succeeds.


function onRequestSuccess() {
var appInsights=window.appInsights||function(config){
function s(config){t[config]=function(){var i=arguments;t.queue.push(function(){t[config].apply(t,i)})}}var t={config:config},r=document,f=window,e="script",o=r.createElement(e),i,u;for(o.src=config.url||"//az416426.vo.msecnd.net/scripts/a/ai.0.js",r.getElementsByTagName(e)[0].parentNode.appendChild(o),t.cookie=r.cookie,t.queue=[],i=["Event","Exception","Metric","PageView","Trace"];i.length;)s("track"+i.pop());return config.disableExceptionTracking||(i="onerror",s("_"+i),u=f[i],f[i]=function(config,r,f,e,o){var s=u&&u(config,r,f,e,o);return s!==!0&&t["_"+i](config,r,f,e,o),s}),t
}({
instrumentationKey:"XXXX"
});
window.appInsights=appInsights;
appInsights.trackPageView(document.title, window.location.href, {User: personProperties.get_displayName()});
}

// This function runs if the executeQueryAsync call fails.


function onRequestFail(sender, args) {
}
</script>

Next Steps
Web tests to monitor the availability of your site.
Application Insights for other types of app.
Developer analytics: languages, platforms, and
integrations
11/15/2017 • 1 min to read

These items are implementations of Application Insights that we've heard about, including some by third parties.

Languages - officially supported by Application Insights team


C#|VB (.NET)
Java
JavaScript web pages
Node.js

Languages - community-supported
PHP
Python
Ruby
Anything else

Platforms and frameworks


ASP.NET
ASP.NET - for apps that are already live
ASP.NET Core
Android (App Center, HockeyApp)
Azure Web Apps
Azure Cloud Services—including both web and worker roles
Azure Functions
Docker
Glimpse
iOS (App Center, HockeyApp)
J2EE
J2EE - for apps that are already live
macOS app (HockeyApp)
Node.js
OSX
Spring
Universal Windows app (App Center, HockeyApp)
WCF
Windows Phone 8 and 8.1 app (HockeyApp)
Windows Presentation Foundation app (HockeyApp)
Windows desktop applications, services, and worker roles
Anything else
Logging frameworks
Log4Net, NLog, or System.Diagnostics.Trace
Java, Log4J, or Logback
Semantic Logging (SLAB) - integrates with Semantic Logging Application Block
Cloud-based load testing
LogStash plugin
OMS Log Analytics
Logary

Content Management Systems


Concrete
Drupal
Joomla
Orchard
SharePoint
WordPress

Export and Data Analysis


Alooma
Power BI
Stream Analytics

Build your own SDK


If there isn't yet an SDK for your language or platform, perhaps you'd like to build one? Take a look at the code of
the existing SDKs listed in the Application Insights SDK project on GitHub.
Deep diagnostics for web apps and services with
Application Insights
11/1/2017 • 11 min to read

Why do I need Application Insights?


Application Insights monitors your running web app. It tells you about failures and performance issues, and helps
you analyze how customers use your app. It works for apps running on many platforms (ASP.NET, J2EE, Node.js, ...)
and is hosted either in the Cloud or on-premises.

It's essential to monitor a modern application while it is running. Most importantly, you want to detect failures
before most of your customers do. You also want to discover and fix performance issues that, while not
catastrophic, perhaps slow things down or cause some inconvenience to your users. And when the system is
performing to your satisfaction, you want to know what the users are doing with it: Are they using the latest
feature? Are they succeeding with it?
Modern web applications are developed in a cycle of continuous delivery: release a new feature or improvement;
observe how well it works for the users; plan the next increment of development based on that knowledge. A key
part of this cycle is the observation phase. Application Insights provides the tools to monitor a web application for
performance and usage.
The most important aspect of this process is diagnostics and diagnosis. If the application fails, then business is
being lost. The prime role of a monitoring framework is therefore to detect failures reliably, notify you immediately,
and to present you with the information needed to diagnose the problem. This is exactly what Application Insights
does.
Where do bugs come from?
Failures in web systems typically arise from configuration issues or bad interactions between their many
components. The first task when tackling a live site incident is therefore to identify the locus of the problem: which
component or relationship is the cause?
Some of us, those with gray hair, can remember a simpler era in which a computer program ran in one computer.
The developers would test it thoroughly before shipping it; and having shipped it, would rarely see or think about it
again. The users would have to put up with the residual bugs for many years.
Things are so very different now. Your app has a plethora of different devices to run on, and it can be difficult to
guarantee the exact same behavior on each one. Hosting apps in the cloud means bugs can be fixed fast, but it also
means continuous competition and the expectation of new features at frequent intervals.
In these conditions, the only way to keep a firm control on the bug count is automated unit testing. It would be
impossible to manually re-test everything on every delivery. Unit testing is now a commonplace part of the build
process. Tools such as the Xamarin Test Cloud help by providing automated UI testing on multiple browser
versions. These testing regimes allow us to hope that the rate of bugs found inside an app can be kept to a
minimum.
Typical web applications have many live components. In addition to the client (in a browser or device app) and the
web server, there's likely to be substantial backend processing. Perhaps the backend is a pipeline of components,
or a looser collection of collaborating pieces. And many of them won't be in your control - they're external services
on which you depend.
In configurations like these, it can be difficult and uneconomical to test for, or foresee, every possible failure mode,
other than in the live system itself.
Questions ...
Some questions we ask when we're developing a web system:
Is my app crashing?
What exactly happened? - If it failed a request, I want to know how it got there. We need a trace of events...
Is my app fast enough? How long does it take to respond to typical requests?
Can the server handle the load? When the rate of requests rises, does the response time hold steady?
How responsive are my dependencies - the REST APIs, databases and other components that my app calls. In
particular, if the system is slow, is it my component, or am I getting slow responses from someone else?
Is my app Up or Down? Can it be seen from around the world? Let me know if it stops....
What is the root cause? Was the failure in my component or a dependency? Is it a communication issue?
How many users are impacted? If I have more than one issue to tackle, which is the most important?

What is Application Insights?


1. Application Insights instruments your app and sends telemetry about it while the app is running. Either you can
build the Application Insights SDK into the app, or you can apply instrumentation at runtime. The former
method is more flexible, as you can add your own telemetry to the regular modules.
2. The telemetry is sent to the Application Insights portal, where it is stored and processed. (Although Application
Insights is hosted in Microsoft Azure, it can monitor any web apps - not just Azure apps.)
3. The telemetry is presented to you in the form of charts and tables of events.
There are two main types of telemetry: aggregated and raw instances.
Instance data includes, for example, a report of a request that has been received by your web app. You can find
and inspect the details of a request using the Search tool in the Application Insights portal. The instance
would include data such as how long your app took to respond to the request, as well as the requested URL,
approximate location of the client, and other data.
Aggregated data includes counts of events per unit time, so that you can compare the rate of requests with the
response times. It also includes averages of metrics such as request response times.
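The relationship between the two types can be sketched as follows (illustrative only; the field names are made up, and the real pipeline aggregates telemetry on the service side):

```javascript
// Roll raw request instances up into per-minute aggregates: a count and an
// average response time per one-minute bucket.
function aggregateByMinute(requests) {
  const buckets = new Map();
  for (const r of requests) {
    const minute = Math.floor(r.timestamp / 60000) * 60000;
    const b = buckets.get(minute) || { count: 0, totalDuration: 0 };
    b.count += 1;
    b.totalDuration += r.duration;
    buckets.set(minute, b);
  }
  return [...buckets.entries()].map(([minute, b]) => ({
    minute: minute,
    count: b.count,
    avgDuration: b.totalDuration / b.count
  }));
}
```

Charting the count against the average duration per bucket is exactly the comparison the portal makes when you plot request rate next to response time.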
The main categories of data are:
Requests to your app (usually HTTP requests), with data on URL, response time, and success or failure.
Dependencies - REST and SQL calls made by your app, also with URI, response times and success
Exceptions, including stack traces.
Page view data, which come from the users' browsers.
Metrics such as performance counters, as well as metrics you write yourself.
Custom events that you can use to track business events
Log traces used for debugging.

Case Study: Real Madrid F.C.


The web service of Real Madrid Football Club serves about 450 million fans around the world. Fans access it both
through web browsers and the Club's mobile apps. Fans can not only book tickets, but also access information and
video clips on results, players and upcoming games. They can search with filters such as numbers of goals scored.
There are also links to social media. The user experience is highly personalized, and is designed as a two-way
communication to engage fans.
The solution is a system of services and applications on Microsoft Azure. Scalability is a key requirement: traffic is
variable and can reach very high volumes during and around matches.
For Real Madrid, it's vital to monitor the system's performance. Azure Application Insights provides a
comprehensive view across the system, ensuring a reliable and high level of service.
The Club also gets in-depth understanding of its fans: where they are (only 3% are in Spain), what interest they
have in players, historical results, and upcoming games, and how they respond to match outcomes.
Most of this telemetry data is automatically collected with no added code, which simplified the solution and
reduced operational complexity. For Real Madrid, Application Insights deals with 3.8 billion telemetry points each
month.
Real Madrid uses the Power BI module to view their telemetry.

Smart detection
Proactive diagnostics is a recent feature. Without any special configuration by you, Application Insights
automatically detects and alerts you about unusual rises in failure rates in your app. It's smart enough to ignore a
background of occasional failures, and also rises that are simply proportionate to a rise in requests. So for example,
if there's a failure in one of the services you depend on, or if the new build you just deployed isn't working so well,
then you'll know about it as soon as you look at your email. (And there are webhooks so that you can trigger other
apps.)
Another aspect of this feature performs a daily in-depth analysis of your telemetry, looking for unusual patterns of
performance that are hard to discover. For example, it can find slow performance associated with a particular
geographical area, or with a particular browser version.
In both cases, the alert not only tells you the symptoms it's discovered, but also gives you data you need to help
diagnose the problem, such as relevant exception reports.
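The idea of ignoring rises that are merely proportionate to load can be sketched as follows. This is a simplified illustration of the principle, not the actual smart detection algorithm; the 2x threshold and minimum-traffic floor are invented for the example.

```javascript
// Simplified illustration of the idea behind smart detection: compare the
// failure *rate* (failures per request) against a baseline window, so a rise
// in failures that merely tracks a rise in traffic does not trigger an alert.
// The 2x threshold and minimum-request floor are invented for this sketch.
function isAbnormalFailureRise(baseline, current) {
  // Ignore windows with too little traffic to be meaningful.
  if (current.requests < 10) return false;
  const baselineRate = baseline.failures / baseline.requests;
  const currentRate = current.failures / current.requests;
  // Alert only when the rate itself (not the raw count) rises sharply.
  return currentRate > baselineRate * 2;
}
```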
Customer Samtec said: "During a recent feature cutover, we found an under-scaled database that was hitting its
resource limits and causing timeouts. Proactive detection alerts came through literally as we were triaging the
issue, very near real time as advertised. This alert coupled with the Azure platform alerts helped us almost instantly
fix the issue. Total downtime < 10 minutes."

Live Metrics Stream


Deploying the latest build can be an anxious experience. If there are any problems, you want to know about them
right away, so that you can back out if necessary. Live Metrics Stream gives you key metrics with a latency of about
one second, and lets you immediately inspect a sample of any failures or exceptions.

Application Map
Application Map automatically discovers your application topology, laying the performance information on top of
it, to let you easily identify performance bottlenecks and problematic flows across your distributed environment. It
allows you to discover application dependencies on Azure services. You can triage a problem by determining whether
it is code-related or dependency-related, and drill from a single place into the related diagnostics experience. For
example, your application may be failing because of performance degradation in the SQL tier. With Application Map,
you can see it immediately and drill into the SQL Index Advisor or Query Insights experience.

Application Insights Analytics


With Analytics, you can write arbitrary queries in a powerful SQL-like language. Diagnosing across the entire app
stack becomes easy as various perspectives get connected and you can ask the right questions to correlate Service
Performance with Business Metrics and Customer Experience.
You can query all the raw telemetry instance and metric data stored in the portal. The language includes filter,
join, aggregation, and other operations. You can calculate fields and perform statistical analysis. There are both
tabular and graphical visualizations.

For example, it's easy to:


Segment your application’s request performance data by customer tiers to understand their experience.
Search for specific error codes or custom event names during live site investigations.
Drill down into the app usage of specific customers to understand how features are acquired and adopted.
Track sessions and response times for specific users to enable support and operations teams to provide instant
customer support.
Determine frequently used app features to answer feature prioritization questions.
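As a concrete illustration, a query like the following segments request performance by result code over the last day. The `requests` table and the `timestamp`, `duration`, and `resultCode` columns are standard Application Insights Analytics schema; the bin size is arbitrary.

```kusto
requests
| where timestamp > ago(24h)
| summarize requestCount = count(), avgDuration = avg(duration) by resultCode, bin(timestamp, 1h)
| order by timestamp asc
```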
Customer DNN said: "Application Insights has provided us with the missing part of the equation for being able to
combine, sort, query, and filter data as needed. Allowing our team to use their own ingenuity and experience to find
data with a powerful query language has allowed us to find insights and solve problems we didn't even know we
had. A lot of interesting answers come from the questions starting with 'I wonder if...'."

Development tools integration


Configuring Application Insights
Visual Studio and Eclipse have tools to configure the correct SDK packages for the project you are developing.
There's a menu command to add Application Insights.
If you use a trace logging framework such as Log4Net, NLog, or System.Diagnostics.Trace, then you
get the option to send the logs to Application Insights along with the other telemetry, so that you can easily
correlate the traces with requests, dependency calls, and exceptions.
Search telemetry in Visual Studio
While developing and debugging a feature, you can view and search the telemetry directly in Visual Studio, using
the same search facilities as in the web portal.
And when Application Insights logs an exception, you can view the data point in Visual Studio and jump straight to
the relevant code.

During debugging, you have the option to keep the telemetry in your development machine, viewing it in Visual
Studio but without sending it to the portal. This local option avoids mixing debugging with production telemetry.
Build annotations
If you use Visual Studio Team Services to build and deploy your app, deployment annotations show up on charts in
the portal. If your latest release had any effect on the metrics, it becomes obvious.
Work items
When an alert is raised, Application Insights can automatically create a work item in your work tracking system.

But what about...?


Privacy and storage - Your telemetry is kept on secure Azure servers.
Performance - The impact is very low. Telemetry is batched.
Pricing - You can get started for free, and that continues while you're in low volume.

Next steps
Getting started with Application Insights is easy. The main options are:
Instrument an already-running web app. This gives you all the built-in performance telemetry. It's available for
Java and IIS servers, and also for Azure web apps.
Instrument your project during development. You can do this for ASP.NET or Java apps, as well as Node.js and a
host of other types.
Instrument any web page by adding a short code snippet.
Monitor performance in web applications
11/1/2017 • 9 min to read

Make sure your application is performing well, and find out quickly about any failures. Application Insights will tell
you about any performance issues and exceptions, and help you find and diagnose the root causes.
Application Insights can monitor both Java and ASP.NET web applications and services, including WCF services. They can be
hosted on-premises, on virtual machines, or as Microsoft Azure websites.
On the client side, Application Insights can take telemetry from web pages and a wide variety of devices including
iOS, Android, and Windows Store apps.

NOTE
We have made a new experience available for finding slow performing pages in your web application. If you don't have
access to it, enable it by configuring your preview options with the Preview blade. Read about this new experience in Find
and fix performance bottlenecks with the interactive Performance investigation.

Set up performance monitoring


If you haven't yet added Application Insights to your project (that is, if it doesn't have ApplicationInsights.config),
choose one of these ways to get started:
ASP.NET web apps
Add exception monitoring
Add dependency monitoring
J2EE web apps
Add dependency monitoring

Exploring performance metrics


In the Azure portal, browse to the Application Insights resource that you set up for your application. The overview
blade shows basic performance data:
Click any chart to see more detail, and to see results for a longer period. For example, click the Requests tile and
then select a time range:
Click a chart to choose which metrics it displays, or add a new chart and select its metrics:

NOTE
Uncheck all the metrics to see the full selection that is available. The metrics fall into groups; when any member of a group
is selected, only the other members of that group appear.

What does it all mean? Performance tiles and reports


There are various performance metrics you can get. Let's start with those that appear by default on the application
blade.
Requests
The number of HTTP requests received in a specified period. Compare this with the results on other reports to see
how your app behaves as the load varies.
HTTP requests include all GET or POST requests for pages, data, and images.
Click on the tile to get counts for specific URLs.
Average response time
Measures the time between a web request entering your application and the response being returned.
The points show a moving average. If there are a lot of requests, there might be some that deviate from the
average without an obvious peak or dip in the graph.
Look for unusual peaks. In general, expect response time to rise with a rise in requests. If the rise is
disproportionate, your app might be hitting a resource limit such as CPU or the capacity of a service it uses.
Click the tile to get times for specific URLs.
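The smoothing behind those points can be sketched in a few lines. This is an illustration only; the window size is arbitrary and the portal's actual smoothing may differ.

```javascript
// Illustration of a moving average over response-time points:
// each output point is the mean of the last `window` raw values.
// The window size is arbitrary for this sketch.
function movingAverage(values, window) {
  const out = [];
  for (let i = 0; i < values.length; i++) {
    const start = Math.max(0, i - window + 1);
    const slice = values.slice(start, i + 1);
    out.push(slice.reduce((a, b) => a + b, 0) / slice.length);
  }
  return out;
}
```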

Slowest requests

Shows which requests might need performance tuning.


Failed requests
A count of requests that threw uncaught exceptions.
Click the tile to see the details of specific failures, and select an individual request to see its detail.
Only a representative sample of failures is retained for individual inspection.
Other metrics
To see what other metrics you can display, click a graph, and then deselect all the metrics to see the full available
set. Click (i) to see each metric's definition.

Selecting any metric disables the others that can't appear on the same chart.

Set alerts
To be notified by email of unusual values of any metric, add an alert. You can choose either to send the email to
the account administrators, or to specific email addresses.
Set the resource before the other properties. Don't choose the webtest resources if you want to set alerts on
performance or usage metrics.
Be careful to note the units in which you're asked to enter the threshold value.
I don't see the Add Alert button. - Is this a group account to which you have read-only access? Check with the
account administrator.

Diagnosing issues
Here are a few tips for finding and diagnosing performance issues:
Set up web tests to be alerted if your web site goes down or responds incorrectly or slowly.
Compare the Request count with other metrics to see if failures or slow response are related to load.
Insert and search trace statements in your code to help pinpoint problems.
Monitor your Web app in operation with Live Metrics Stream.
Capture the state of your .NET application with Snapshot Debugger.

NOTE
We are in the process of transitioning Application Insights performance investigation to an interactive full-screen experience.
The following documentation covers the new experience first, and then reviews the previous experience, which remains
available throughout the transition in case you still need to access it.

Find and fix performance bottlenecks with an interactive full-screen performance investigation
You can use the new Application Insights interactive performance investigation to review slow performing
operations in your Web app. You can quickly select a specific slow operation and use Profiler to root cause the
slow operations down to code. Using the new duration distribution shown for the selected operation, you can
quickly assess at a glance just how bad the experience is for your customers. In fact, for each slow operation you
can see how many of your user interactions were impacted. In the following example, we've decided to take a
closer look at the experience for the GET Customers/Details operation. In the duration distribution, we can see that
there are three spikes. The leftmost spike is around 400 ms and represents a great, responsive experience. The
middle spike is around 1.2 s and represents a mediocre experience. Finally, at 3.6 s there is another small spike
that represents the 99th percentile experience, which is likely to leave customers dissatisfied. That experience is
ten times slower than the great experience for the same operation.

To get a better sense of the user experiences for this operation, we can select a larger time range. We can then also
narrow down in time on a specific time window where the operation was particularly slow. In the following
example we've switched from the default 24 hours time range to the 7 days time range and then zoomed into the
9:47 to 12:47 time window between Tue the 12th and Wed the 13th. Note that both the duration distribution and
the number of sample and profiler traces have been updated on the right.
To narrow in on the slow experiences, we next zoom into the durations that fall between 95th and the 99th
percentile. These represent the 4% of user interactions that were particularly slow.
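Selecting that band amounts to a percentile computation like the sketch below. It uses a simple nearest-rank percentile; the portal works on sampled data and its estimator may differ.

```javascript
// Sketch of selecting the slow tail: keep only the durations that fall
// between the 95th and 99th percentiles. Uses a simple nearest-rank
// percentile; the portal's estimator may differ.
function percentile(sorted, p) {
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

function slowTail(durations) {
  const sorted = [...durations].sort((a, b) => a - b);
  const p95 = percentile(sorted, 95);
  const p99 = percentile(sorted, 99);
  return sorted.filter(d => d >= p95 && d <= p99);
}
```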

We can now either look at the representative samples, by clicking on the Samples button, or at the representative
profiler traces, by clicking on the Profiler traces button. In this example there are four traces that have been
collected for GET Customers/Details in the time window and range duration of interest.
Sometimes the issue will not be in your code, but rather in a dependency your code calls. You can switch to the
Dependencies tab in the performance triage view to investigate such slow dependencies. Note that by default the
performance view is trending averages, but what you really want to look at is the 95th percentile (or the 99th, in
case you are monitoring a very mature service). In the following example we have focused on the slow Azure
BLOB dependency, where we call PUT fabrikamaccount. The good experiences cluster around 40ms, while the slow
calls to the same dependency are three times slower, clustering around 120ms. It doesn't take many of these calls
to add up to cause the respective operation to noticeably slow down. You can drill into the representative samples
and profiler traces, just like you can with the Operations tab.

Another really powerful feature that is new to the interactive full-screen performance investigation is the
integration with insights. Application Insights can detect responsiveness regressions and surface them as insights,
and can help you identify common properties in the sample set you decided to focus on. The best way to look at all
of the available insights is to switch to a 30 days time range and then select Overall to see insights across all
operations for the past month.
Application Insights in the new performance triage view can help you find the needles in the haystack that
result in poor experiences for your Web app users.

Deprecated: Find and fix performance bottlenecks with the legacy bladed performance investigation
You can use the legacy Application Insights bladed performance investigation to locate areas of your Web app that
are slowing down overall performance. You can find specific pages that are slowing down, and use the Profiler to
trace the root cause of these issues down to code.
Create a list of slow performing pages
The first step in finding performance issues is to get a list of slow-responding pages. The screenshot below
demonstrates using the Performance blade to get a list of potential pages to investigate further. You can quickly
see from this page that there was a slow-down in the response time of the app at approximately 6:00 PM and
again at approximately 10 PM. You can also see that the GET customer/details operation had some long-running
operations with a median response time of 507.05 milliseconds.
Drill down on specific pages
Once you have a snapshot of your app's performance, you can get more details on specific slow-performing
operations. Click on any operation in the list to see the details as shown below. From the chart you can see if the
performance was based on a dependency. You can also see how many users experienced the various response
times.
Drill down on a specific time period
After you have identified a point in time to investigate, drill down even further to look at the specific operations
that might have caused the performance slow-down. When you click on a specific point in time you get the details
of the page as shown below. In the example below you can see the operations listed for a given time period along
with the server response codes and the operation duration. You also have the URL for opening a TFS work item if
you need to send this information to your development team.
Drill down on a specific operation
To investigate further, drill down on individual operations. Click on an operation from the list to see the details of the
operation as shown below. In this example you can see that the operation failed, and Application Insights has
provided the details of the exception the application threw. Again, you can easily create a TFS work item from this
blade.
Next steps
Web tests - Have web requests sent to your application at regular intervals from around the world.
Capture and search diagnostic traces - Insert trace calls and sift through the results to pinpoint issues.
Usage tracking - Find out how people use your application.
Troubleshooting and Q&A
Separating telemetry from Development, Test, and
Production
11/1/2017 • 4 min to read

When you are developing the next version of a web application, you don't want to mix up the Application Insights
telemetry from the new version and the already released version. To avoid confusion, send the telemetry from
different development stages to separate Application Insights resources, with separate instrumentation keys
(ikeys). To make it easier to change the instrumentation key as a version moves from one stage to another, it can
be useful to set the ikey in code instead of in the configuration file.
(If your system is an Azure Cloud Service, there's another method of setting separate ikeys.)

About resources and instrumentation keys


When you set up Application Insights monitoring for your web app, you create an Application Insights resource in
Microsoft Azure. You open this resource in the Azure portal in order to see and analyze the telemetry collected
from your app. The resource is identified by an instrumentation key (ikey). When you install the Application
Insights package to monitor your app, you configure it with the instrumentation key, so that it knows where to
send the telemetry.
You typically choose to use separate resources or a single shared resource in different scenarios:
Different, independent applications - Use a separate resource and ikey for each app.
Multiple components or roles of one business application - Use a single shared resource for all the component
apps. Telemetry can be filtered or segmented by the cloud_RoleName property.
Development, Test, and Release - Use a separate resource and ikey for each 'stamp' or stage of production.
A | B testing - Use a single resource. Create a TelemetryInitializer to add a property to the telemetry that
identifies the variants.
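The initializer idea can be sketched independently of any SDK: a function runs over each telemetry item before it is sent and stamps a variant property. The item shape and the "variant" property name are invented for this illustration.

```javascript
// SDK-independent sketch of a telemetry initializer for A|B testing:
// every item passes through the initializer before being sent, and is
// stamped with the experiment variant. The item shape and the "variant"
// property name are invented for this illustration.
function makeVariantInitializer(variantName) {
  return function (item) {
    item.properties = item.properties || {};
    item.properties.variant = variantName;
    return item;
  };
}

const initializer = makeVariantInitializer("B");
```

With the variant stamped on every item, the two populations can then be compared by filtering on that property in search or Analytics.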

Dynamic instrumentation key


To make it easier to change the ikey as the code moves between stages of production, set it in code instead of in
the configuration file.
Set the key in an initialization method, such as global.aspx.cs in an ASP.NET service:
C#

protected void Application_Start()
{
    // Requires Microsoft.ApplicationInsights.Extensibility and System.Web.Configuration:
    TelemetryConfiguration.Active.InstrumentationKey =
        // - for example -
        WebConfigurationManager.AppSettings["ikey"];
    ...
}

In this example, the ikeys for the different resources are placed in different versions of the web configuration file.
Swapping the web configuration file - which you can do as part of the release script - will swap the target resource.
Web pages
The iKey is also used in your app's web pages, in the script that you got from the quick start blade. Instead of
coding it literally into the script, generate it from the server state. For example, in an ASP.NET MVC app:
JavaScript in Razor

<script type="text/javascript">
// Standard Application Insights web page script:
var appInsights = window.appInsights || function(config){ ...
// Modify this part:
}({instrumentationKey:
    // Generate from server property:
    "@Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration.Active.InstrumentationKey"
}) // ...

Create additional Application Insights resources


To separate telemetry for different application components, or for different stamps (dev/test/production) of the
same component, create a new Application Insights resource.
In the Azure portal, add an Application Insights resource:

Application type affects what you see on the overview blade and the properties available in metric explorer. If
you don't see your type of app, choose one of the web types for web pages.
Resource group is a convenience for managing properties like access control. You could use separate resource
groups for development, test, and production.
Subscription is your payment account in Azure.
Location is where we keep your data. Currently it can't be changed.
Add to dashboard puts a quick-access tile for your resource on your Azure Home page.
Creating the resource takes a few seconds. You'll see an alert when it's done.
(You can write a PowerShell script to create a resource automatically.)
Getting the instrumentation key
The instrumentation key identifies the resource that you created.
You need the instrumentation keys of all the resources to which your app will send data.

Filter on build number


When you publish a new version of your app, you'll want to be able to separate the telemetry from different
builds.
You can set the Application Version property so that you can filter search and metric explorer results.

There are several different methods of setting the Application Version property.
Set directly:
telemetryClient.Context.Component.Version = typeof(MyClass).Assembly.GetName().Version.ToString(); // MyClass: any type in your assembly

Wrap that line in a telemetry initializer to ensure that all TelemetryClient instances are set consistently.
[ASP.NET] Set the version in BuildInfo.config. The web module will pick up the version from the
BuildLabel node. Include this file in your project and remember to set the Copy Always property in Solution
Explorer.

<?xml version="1.0" encoding="utf-8"?>


<DeploymentEvent xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                 xmlns="[Link]">
<ProjectName>AppVersionExpt</ProjectName>
<Build type="MSBuild">
<MSBuild>
<BuildLabel kind="label">[Link]</BuildLabel>
</MSBuild>
</Build>
</DeploymentEvent>

[ASP.NET] Generate BuildInfo.config automatically in MSBuild. To do this, add a few lines to your .csproj
file:

<PropertyGroup>
<GenerateBuildInfoConfigFile>true</GenerateBuildInfoConfigFile>
<IncludeServerNameInBuildInfo>true</IncludeServerNameInBuildInfo>
</PropertyGroup>

This generates a file called yourProjectName.BuildInfo.config. The Publish process renames it to
BuildInfo.config.
The build label contains a placeholder (AutoGen_...) when you build with Visual Studio. But when built with
MSBuild, it is populated with the correct version number.
To allow MSBuild to generate version numbers, set the version like 1.0.* in AssemblyInfo.cs

Version and release tracking


To track the application version, make sure BuildInfo.config is generated by your Microsoft Build Engine process.
In your .csproj file, add:

<PropertyGroup>
<GenerateBuildInfoConfigFile>true</GenerateBuildInfoConfigFile>
<IncludeServerNameInBuildInfo>true</IncludeServerNameInBuildInfo>
</PropertyGroup>

When it has the build info, the Application Insights web module automatically adds Application version as a
property to every item of telemetry. That allows you to filter by version when you perform diagnostic searches, or
when you explore metrics.
However, notice that the build version number is generated only by the Microsoft Build Engine, not by the
developer build in Visual Studio.
Release annotations
If you use Visual Studio Team Services, you can get an annotation marker added to your charts whenever you
release a new version. The following image shows how this marker appears.
Next steps
Shared resources for multiple roles
Create a Telemetry Initializer to distinguish A|B variants
Monitor multi-component applications with
Application Insights (preview)
11/1/2017 • 4 min to read

You can monitor apps that consist of multiple server components, roles, or services with Azure Application
Insights. The health of the components and the relationships between them are displayed on a single Application
Map. You can trace individual operations through multiple components with automatic HTTP correlation.
Container diagnostics can be integrated and correlated with application telemetry. Use a single Application
Insights resource for all the components of your application.

We use 'component' here to mean any functioning part of a large application. For example, a typical business
application may consist of client code running in web browsers, talking to one or more web app services, which in
turn use back-end services. Server components may be hosted on-premises or in the cloud, or may be Azure web
and worker roles, or may run in containers such as Docker or Service Fabric.
Sharing a single Application Insights resource
The key technique here is to send telemetry from every component in your application to the same Application
Insights resource, but use the cloud_RoleName property to distinguish components when necessary. The
Application Insights SDK adds the cloud_RoleName property to the telemetry components emit. For example, the
SDK will add a web site name, or service role name to the cloud_RoleName property. You can override this value
with a telemetryinitializer. The Application Map uses the cloud_RoleName property to identify the components on
the map.
For more information about how to override the cloud_RoleName property, see Add properties:
ITelemetryInitializer.
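As a sketch of what such an initializer does (independent of any particular SDK), the function below stamps the envelope's role tag before the item is sent. The simplified envelope shape is invented for the illustration; "ai.cloud.role" is the tag that carries cloud_RoleName in the telemetry wire format.

```javascript
// SDK-independent sketch of overriding cloud_RoleName: an initializer that
// sets the envelope's role tag before the item is sent. "ai.cloud.role" is
// the wire-format tag for cloud_RoleName; the envelope shape here is
// simplified for the illustration.
function makeRoleNameInitializer(roleName) {
  return function (envelope) {
    envelope.tags = envelope.tags || {};
    envelope.tags["ai.cloud.role"] = roleName;
    return envelope;
  };
}

const setRole = makeRoleNameInitializer("frontend-web");
```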
In some cases, this may not be appropriate, and you may prefer to use separate resources for different groups of
components. For example, you might need to use different resources for management or billing purposes. Using
separate resources means that you don't see all the components displayed on a single Application Map; and that
you can't query across components in Analytics. You also have to set up the separate resources.
With that caveat, we'll assume in the rest of this document that you want to send data from multiple components
to one Application Insights resource.

Configure multi-component applications


To get a multi-component application map, you need to achieve these goals:
Install the latest pre-release Application Insights package in each component of the application.
Share a single Application Insights resource for all the components of your application.
Enable Multi-role Application Map in the Previews blade.
Configure each component of your application using the appropriate method for its type. (ASP.NET, Java, Node.js,
JavaScript.)
1. Install the latest pre -release package
Update or install the Application Insights packages in the project for each server component. If you're using Visual
Studio:
1. Right-click a project and select Manage NuGet Packages.
2. Select Include prerelease.
3. If Application Insights packages appear in Updates, select them.
Otherwise, browse for and install the appropriate package:
Microsoft.ApplicationInsights.Web
Microsoft.ApplicationInsights.ServiceFabric - for components running as guest executables and Docker
containers running in a Service Fabric application
Microsoft.ApplicationInsights.ServiceFabric.Native - for reliable services in Service Fabric applications
Microsoft.ApplicationInsights.Kubernetes - for components running in Docker on Kubernetes
2. Share a single Application Insights resource
In Visual Studio, right-click a project and select Configure Application Insights, or Application Insights >
Configure. For the first project, use the wizard to create an Application Insights resource. For subsequent
projects, select the same resource.
If there is no Application Insights menu, configure manually:
1. In Azure portal, open the Application Insights resource you already created for another component.
2. In the Overview blade, open the Essentials drop-down tab, and copy the Instrumentation Key.
3. In your project, open ApplicationInsights.config and insert:
<InstrumentationKey>your copied key</InstrumentationKey>
3. Enable multi-role Application Map
In the Azure portal, open the resource for your application. In the Previews blade, enable Multi-role Application
Map.
4. Enable Docker metrics (Optional)
If a component runs in a Docker hosted on an Azure Windows VM, you can collect additional metrics from the
container. Insert this in your Azure diagnostics configuration file:

"DiagnosticMonitorConfiguration": {
    ...
    "sinks": "applicationInsights",
    "DockerSources": {
        "Stats": {
            "enabled": true,
            "sampleRate": "PT1M"
        }
    },
    ...
},
...
"SinksConfig": {
    "Sink": [{
        "name": "applicationInsights",
        "ApplicationInsights": "<your instrumentation key here>"
    }]
}
...

Use cloud_RoleName to separate components


The cloud_RoleName property is attached to all telemetry. It identifies the component - the role or service - that
originates the telemetry. (It is not the same as cloud_RoleInstance, which separates identical roles that are running
in parallel on multiple server processes or machines.)
In the portal, you can filter or segment your telemetry using this property. In this example, the Failures blade is
filtered to show just information from the front-end web service, filtering out failures from the CRM API backend:

Trace operations between components


You can trace the calls made from one component to another while processing an individual operation.

Click through to a correlated list of telemetry for this operation across the front-end web server and the back-end
API:
Next steps
Separate telemetry from Development, Test, and Production
How do I ... in Application Insights?
11/1/2017 • 5 min to read

Get an email when ...


Email if my site goes down
Set an availability web test.
Email if my site is overloaded
Set an alert on Server response time. A threshold between 1 and 2 seconds should work.

Your app might also show signs of strain by returning failure codes. Set an alert on Failed requests.
If you want to set an alert on Server exceptions, you might have to do some additional setup in order to see data.
Email on exceptions
1. Set up exception monitoring
2. Set an alert on the Exception count metric
Email on an event in my app
Let's suppose you'd like to get an email when a specific event occurs. Application Insights doesn't provide this
facility directly, but it can send an alert when a metric crosses a threshold.
Alerts can be set on custom metrics, though not custom events. Write some code to increase a metric when the
event occurs:
telemetry.TrackMetric("Alarm", 10);

or:

var measurements = new Dictionary<string,double>();


measurements["Alarm"] = 10;
telemetry.TrackEvent("status", null, measurements);

Because alerts have two states, you have to send a low value when you consider the alert to have ended:

telemetry.TrackMetric("Alarm", 0.5);

Create a chart in metric explorer to see your alarm:

Now set an alert to fire when the metric goes above a mid value for a short period:

Set the averaging period to the minimum.


You'll get an email when the metric goes above the threshold, and another when it drops back below.
Some points to consider:
An alert has two states ("alert" and "healthy"). The state is evaluated only when a metric is received.
An email is sent only when the state changes. This is why you have to send both high and low-value metrics.
To evaluate the alert, the average is taken of the received values over the preceding period. This occurs every
time a metric is received, so emails can be sent more frequently than the period you set.
Since emails are sent both on "alert" and "healthy", you might want to consider re-thinking your one-shot event
as a two-state condition. For example, instead of a "job completed" event, have a "job in progress" condition,
where you get emails at the start and end of a job.
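Those points can be summarized in a small state-machine sketch: on every received value, average the values from the preceding period, and notify only when the state flips. The threshold and window below are invented for the example.

```javascript
// Sketch of the two-state alert logic described above: on each received
// metric value, average the values from the preceding window; notify only
// when the alert state changes. Threshold and window are invented here.
function createAlert(threshold, windowMs) {
  let state = "healthy";
  const points = [];
  return function onMetric(timestampMs, value) {
    points.push({ timestampMs, value });
    // Keep only the points within the preceding window.
    while (points.length && points[0].timestampMs < timestampMs - windowMs) {
      points.shift();
    }
    const avg = points.reduce((s, p) => s + p.value, 0) / points.length;
    const next = avg > threshold ? "alert" : "healthy";
    const changed = next !== state;
    state = next;
    // An email would be sent only when `changed` is true.
    return { state, changed };
  };
}
```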
Set up alerts automatically
Use PowerShell to create new alerts

Use PowerShell to Manage Application Insights


Create new resources
Create new alerts

Separate telemetry from different versions


Multiple roles in an app: Use a single Application Insights resource, and filter on cloud_Rolename. Learn more
Separating development, test, and release versions: Use different Application Insights resources. Pick up the
instrumentation keys from web.config. Learn more
Reporting build versions: Add a property using a telemetry initializer. Learn more

Monitor backend servers and desktop apps


Use the Windows Server SDK module.

Visualize data
Dashboard with metrics from multiple apps
In Metric Explorer, customize your chart and save it as a favorite. Pin it to the Azure dashboard.
Dashboard with data from other sources and Application Insights
Export telemetry to Power BI.
Or
Use SharePoint as your dashboard, displaying data in SharePoint web parts. Use continuous export and Stream
Analytics to export to SQL. Use PowerView to examine the database, and create a SharePoint web part for
PowerView.
Filter out anonymous or authenticated users
If your users sign in, you can set the authenticated user id. (It doesn't happen automatically.)
You can then:
Search on specific user ids
Filter metrics to either anonymous or authenticated users

Modify property names or values


Create a filter. This lets you modify or filter telemetry before it is sent from your app to Application Insights.
List specific users and their usage
If you just want to search for specific users, you can set the authenticated user id.
If you want a list of users with data such as what pages they look at or how often they log in, you have two options:
Set authenticated user id, export to a database and use suitable tools to analyze your user data there.
If you have only a small number of users, send custom events or metrics, using the data of interest as the metric
value or event name, and setting the user id as a property. To analyze page views, replace the standard
JavaScript trackPageView call. To analyze server-side telemetry, use a telemetry initializer to add the user id to
all server telemetry. You can then filter and segment metrics and searches on the user id.

Reduce traffic from my app to Application Insights


In ApplicationInsights.config, disable any modules you don't need, such as the performance counter collector.
Use Sampling and filtering at the SDK.
In your web pages, limit the number of Ajax calls reported for every page view. In the script snippet after
instrumentationKey:... , insert: ,maxAjaxCallsPerView:3 (or a suitable number).
If you're using TrackMetric, compute the aggregate of batches of metric values before sending the result. There's
an overload of TrackMetric() that provides for that.
Learn more about pricing and quotas.
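The pre-aggregation advice for TrackMetric can be sketched like this. The helper below is hypothetical (Python, for illustration; the SDK's aggregating overload accepts count/sum/min/max on a single call): buffer raw values and emit one aggregate per batch, so N raw values cost one telemetry item instead of N.

```python
# Hypothetical sketch of client-side pre-aggregation: instead of sending every
# raw value, send one item per batch carrying count/sum/min/max.
class MetricAggregator:
    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.values = []
        self.sent = []                      # stands in for calls to TrackMetric

    def track(self, value):
        self.values.append(value)
        if len(self.values) >= self.batch_size:
            self.flush()

    def flush(self):
        if not self.values:
            return
        v = self.values
        self.sent.append({"count": len(v), "sum": sum(v),
                          "min": min(v), "max": max(v)})
        self.values = []

agg = MetricAggregator(batch_size=4)
for value in [3, 1, 4, 1, 5]:
    agg.track(value)
# After 5 values with batch_size=4: one aggregate sent, one value still buffered
# until the next flush.
```

The same shape works for any batch boundary (count-based here; time-based flushing is another common choice).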

Disable telemetry
To dynamically stop and start the collection and transmission of telemetry from the server:

using Microsoft.ApplicationInsights.Extensibility;

TelemetryConfiguration.Active.DisableTelemetry = true;

To disable selected standard collectors - for example, performance counters, HTTP requests, or dependencies -
delete or comment out the relevant lines in ApplicationInsights.config. You could do this, for example, if you want to
send your own TrackRequest data.

View system performance counters


Among the metrics you can show in metrics explorer are a set of system performance counters. There's a
predefined blade titled Servers that displays several of them.
If you see no performance counter data
IIS server on your own machine or on a VM. Install Status Monitor.
Azure web site - we don't support performance counters yet. There are several metrics you can get as a
standard part of the Azure web site control panel.
Unix server - Install collectd
To display more performance counters
First, add a new chart and see if the counter is in the basic set that we offer.
If not, add the counter to the set collected by the performance counter module.
Add continuous monitoring to your release pipeline
12/7/2017 • 2 min to read

Visual Studio Team Services (VSTS) integrates with Azure Application Insights to allow continuous monitoring of
your DevOps release pipeline throughout the software development lifecycle.
VSTS now supports continuous monitoring whereby release pipelines can incorporate monitoring data from
Application Insights and other Azure resources. When an Application Insights alert is detected, the deployment can
remain gated or be rolled back until the alert is resolved. If all checks pass, deployments can proceed automatically
from test all the way to production without the need for manual intervention.

Configure continuous monitoring


1. Select an existing VSTS Project.
2. Hover over Build and Release > Select Releases > Click the plus sign > Create release definition >
Search for Monitoring > Azure App Service Deployment with Continuous Monitoring.

3. Click Apply.
4. Next to the red exclamation point, select the blue text to View environment tasks.

A configuration box appears; use the following table to fill out the input fields.

PARAMETER | VALUE
Environment name | Name that describes the release definition environment
Azure subscription | Drop-down populates with any Azure subscriptions linked to the VSTS account
App Service name | Manual entry of a new value may be required for this field, depending on other selections
Resource Group | Drop-down populates with available Resource Groups
Application Insights resource name | Drop-down populates with all Application Insights resources that correspond to the previously selected resource group

5. Select Configure Application Insights alerts


6. For default alert rules, select Save > Enter a descriptive comment > Click OK

Modify alert rules


1. To modify the predefined Alert settings, click the box with ellipses ... to the right of Alert rules.
(Four alert rules are present out of the box: Availability, Failed requests, Server response time, and Server
exceptions.)
2. Click the drop-down symbol next to Availability.
3. Modify the availability Threshold to meet your service level requirements.

4. Select OK > Save > Enter a descriptive comment > Click OK.

Add deployment conditions


1. Click Pipeline > Select the Pre or Post-deployment conditions symbol depending on the stage that
requires a continuous monitoring gate.

2. Set Gates to Enabled > Approval gates > Click Add.


3. Select Azure Monitor (This option gives you the ability to access alerts both from Azure Monitor and
Application Insights)

4. Enter a Gates timeout value.


5. Enter a Sampling Interval.

Deployment gate status logs


Once you add deployment gates, an alert in Application Insights that exceeds your previously defined threshold
guards your deployment from unwanted release promotion. Once the alert is resolved, the deployment can proceed
automatically.
To observe this behavior, select Releases > right-click the release name > Open > Logs.

Next steps
To learn more about VSTS Build and Release, try these quickstarts.
Profile live Azure web apps with Application Insights
12/19/2017 • 18 min to read

This feature of Application Insights is generally available for Azure App Service and is in preview for Azure
compute resources.
Find out how much time is spent in each method in your live web application by using Application Insights
Profiler. The Application Insights profiling tool shows detailed profiles of live requests that were served by your
app, and highlights the hot path that uses the most time. The profiler automatically selects examples that have
different response times, and then uses various techniques to minimize overhead.
The profiler currently works for ASP.NET web apps running on Azure App Service, in at least the Basic service tier.

Enable the profiler


Install Application Insights in your code. If it's already installed, make sure you have the latest version. To check
for the latest version, in Solution Explorer, right-click your project, and then select Manage NuGet packages >
Updates > Update all packages. Then, redeploy your app.
Using ASP.NET Core? Get more information.
In the Azure portal, open the Application Insights resource for your web app. Select Performance > Enable
Application Insights Profiler.

Alternatively, you can select Configure to view status and enable or disable the profiler.

Web apps that are configured with Application Insights are listed under Configure. Follow instructions to install
the profiler agent, if needed. If no web apps have been configured with Application Insights, select Add Linked
Apps.
To control the profiler on all your linked web apps, in the Configure pane, select Enable Profiler or Disable
Profiler.
Unlike web apps that are hosted through App Service plans, applications that are hosted in Azure compute
resources (for example, Azure Virtual Machines, virtual machine scale sets, Azure Service Fabric, or Azure Cloud
Services) are not directly managed by Azure. In this case, there's no web app to link to. Instead of linking to an
app, select the Enable Profiler button.

Disable the profiler


To stop or restart the profiler for an individual App Service instance, under Web Jobs, go to the App Service
resource. To delete the profiler, go to Extensions.

We recommend that you have the profiler enabled on all your web apps to discover any performance issues as
early as possible.
If you use WebDeploy to deploy changes to your web application, ensure that you exclude the App_Data folder
from being deleted during deployment. Otherwise, the profiler extension's files are deleted the next time you
deploy the web application to Azure.
Using profiler with Azure VMs and Azure compute resources (preview)
When you enable Application Insights for Azure App Service at runtime, Application Insights Profiler is
automatically available. If you have already enabled Application Insights for the resource, you might need to
update to the latest version by using the Configure wizard.
Get information about a preview version of the profiler for Azure compute resources.

Limitations
The default data retention is five days. The maximum data ingested per day is 10 GB.
There are no charges for using the profiler service. To use the profiler service, your web app must be hosted in at
least the Basic tier of App Service.

Overhead and sampling algorithm


To capture traces, the profiler randomly runs for two minutes each hour on each virtual machine that hosts an
application with the profiler enabled. When the profiler is running, it adds between 5% and 15% CPU overhead to
the server. The more servers that are available for hosting the application, the less impact the profiler has on the
overall application performance. This is because the sampling algorithm results in the profiler running on only
5% of servers at any time. More servers are available to serve web requests to offset the server overhead caused
by running the profiler.
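Under the figures quoted above, the fleet-wide cost can be estimated with simple arithmetic (an illustration, not a measured guarantee): if roughly 5% of servers are profiling at any moment and each pays 5% to 15% CPU while profiling, the average overhead across the fleet is small.

```python
# Back-of-the-envelope estimate of average profiler overhead across a fleet,
# using the figures quoted above (illustrative only).
def average_overhead(fraction_profiling, per_server_overhead):
    return fraction_profiling * per_server_overhead

low  = average_overhead(0.05, 0.05)   # 5% of servers paying 5% CPU
high = average_overhead(0.05, 0.15)   # 5% of servers paying 15% CPU
# The fleet-wide average lands between 0.25% and 0.75% of total CPU.
```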

View profiler data


Go to the Performance pane, and then scroll down to the list of operations.

The operations table has these columns:


COUNT: The number of these requests in the time range of the COUNT pane.
MEDIAN: The typical time your app takes to respond to a request. Half of all responses were faster than this
value.
95TH PERCENTILE: 95% of responses were faster than this value. If this value is substantially different from
the median, there might be an intermittent problem with your app. (Or, it might be explained by a design
feature, like caching.)
PROFILER TRACES: An icon indicates that the profiler has captured stack traces for this operation.
Select View to open the trace explorer. The explorer shows several samples that the profiler has captured,
classified by response time.
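The MEDIAN and 95TH PERCENTILE columns can be reproduced from raw response times. The sketch below uses nearest-rank percentiles for illustration; the portal's exact percentile method may differ.

```python
import math

# Simplified nearest-rank percentile over a sorted list of response times (ms).
def percentile(sorted_values, p):
    rank = max(1, math.ceil(p / 100 * len(sorted_values)))
    return sorted_values[rank - 1]

times = sorted([120, 95, 110, 105, 100, 115, 130, 125, 90, 2000])
median = percentile(times, 50)   # half of responses were faster than this
p95    = percentile(times, 95)   # one slow outlier pulls p95 far above median
```

A p95 far above the median, as in this sample, is exactly the "substantially different" case the text describes: most requests are fast, but an intermittent slow path exists.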
If you are using the Preview Performance pane, go to the Take Actions section of the page to view profiler
traces. Select the Profiler Traces button.
Select a sample to show a code-level breakdown of time spent executing the request.

The trace explorer shows the following information:


Show Hot Path: Opens the biggest leaf node, or at least something close. In most cases, this node is adjacent
to a performance bottleneck.
Label: The name of the function or event. The tree shows a mix of code and events that occurred (like SQL
and HTTP events). The top event represents the overall request duration.
Elapsed: The time interval between the start of the operation and the end of the operation.
When: When the function or event was running in relation to other functions.
How to read performance data
The Microsoft service profiler uses a combination of sampling methods and instrumentation to analyze the
performance of your application. When detailed collection is in progress, the service profiler samples the
instruction pointer of each of the machine's CPUs every millisecond. Each sample captures the complete call
stack of the thread that currently is executing. It gives detailed and useful information about what that thread was
doing, both at a high level and at a low level of abstraction. The service profiler also collects other events to track
activity correlation and causality, including context switching events, Task Parallel Library (TPL) events, and
thread pool events.
The call stack that's shown in the timeline view is the result of the sampling and instrumentation. Because each
sample captures the complete call stack of the thread, it includes code from the Microsoft .NET Framework, and
from other frameworks that you reference.
Object allocation (clr!JIT_New or clr!JIT_Newarr1)
clr!JIT_New and clr!JIT_Newarr1 are helper functions in the .NET Framework that allocate memory from a
managed heap. clr!JIT_New is invoked when an object is allocated. clr!JIT_Newarr1 is invoked when an object
array is allocated. These two functions typically are very fast, and take relatively small amounts of time. If you see
clr!JIT_New or clr!JIT_Newarr1 take a substantial amount of time in your timeline, it's an indication that the
code might be allocating many objects and consuming significant amounts of memory.
Loading code (clr!ThePreStub)
clr!ThePreStub is a helper function in the .NET Framework that prepares the code to execute for the first time.
This typically includes, but is not limited to, just-in-time (JIT) compilation. For each C# method, clr!ThePreStub
should be invoked at most once during the lifetime of a process.
If clr!ThePreStub takes a substantial amount of time for a request, this indicates that the request is the first one
that executes that method. The time for the .NET Framework runtime to load that method is significant. You
might consider using a warmup process that executes that portion of the code before your users access it, or
consider running Native Image Generator (Ngen.exe) on your assemblies.
Lock contention (clr!JITutil_MonContention or clr!JITutil_MonEnterWorker)
clr!JITutil_MonContention or clr!JITutil_MonEnterWorker indicates that the current thread is waiting for a
lock to be released. This typically shows up when executing a C# LOCK statement, when invoking the
System.Threading.Monitor.Enter method, or when invoking a method with the MethodImplOptions.Synchronized attribute.
Lock contention typically occurs when thread A acquires a lock, and thread B tries to acquire the same lock
before thread A releases it.
Loading code ([COLD])
If the method name contains [COLD], such as mscorlib.ni.dll![COLD]System.Reflection.CustomAttribute.IsDefined,
the .NET Framework runtime is executing code for the
first time that is not optimized by profile-guided optimization. For each method, it should show up at most once
during the lifetime of the process.
If loading code takes a substantial amount of time for a request, this indicates that the request is the first one to
execute the unoptimized portion of the method. Consider using a warmup process that executes that portion of
the code before your users access it.
Send HTTP request
Methods like HttpClient.SendAsync indicate that the code is waiting for an HTTP request to be completed.
Database operation
Methods like SqlCommand.Execute indicate that the code is waiting for a database operation to finish.
Waiting (AWAIT_TIME)
AWAIT_TIME indicates that the code is waiting for another task to finish. This typically happens with the C#
AWAIT statement. When the code does a C# AWAIT, the thread unwinds and returns control to the thread pool,
and there is no thread that is blocked waiting for the AWAIT to finish. However, logically, the thread that did the
AWAIT is "blocked," and is waiting for the operation to finish. The AWAIT_TIME statement indicates the blocked
time waiting for the task to finish.
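The await behavior described above can be illustrated with a small sketch (the document's examples are C#, but the task-based await model is the same in Python's asyncio): while a coroutine awaits, no thread is blocked, so two awaited delays overlap instead of adding up.

```python
import asyncio
import time

# While a coroutine awaits, the thread is returned to the scheduler, so two
# awaited delays can run concurrently on a single thread.
async def fetch(delay):
    await asyncio.sleep(delay)   # logically "blocked", but no thread is held
    return delay

async def main():
    start = time.monotonic()
    results = await asyncio.gather(fetch(0.1), fetch(0.1))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
# Two 0.1 s awaits overlap, so elapsed is roughly 0.1 s rather than 0.2 s.
```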
Blocked time
BLOCKED_TIME indicates that the code is waiting for another resource to be available. For example, it might be
waiting for a synchronization object, for a thread to be available, or for a request to finish.
CPU time
The CPU is busy executing the instructions.
Disk time
The application is performing disk operations.
Network time
The application is performing network operations.
When column
The When column is a visualization of how the INCLUSIVE samples collected for a node vary over time. The total
range of the request is divided into 32 time buckets. The inclusive samples for that node are accumulated in
those 32 buckets. Each bucket is represented as a bar. The height of the bar represents a scaled value. For nodes
marked CPU_TIME or BLOCKED_TIME, or where there is an obvious relationship of consuming a resource (CPU,
disk, thread), the bar represents consuming one of those resources for the period of time of that bucket. For
these metrics, it's possible to get a value of greater than 100% by consuming multiple resources. For example, if
on average you use two CPUs during an interval, you get 200%.
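The bucketing described above can be sketched as follows. This is a simplified model (the real viewer also scales bar heights and weights resource consumption): split the request's time range into 32 buckets and accumulate each inclusive sample into the bucket its timestamp falls in.

```python
# Sketch of the When-column histogram: divide the request's time range into
# 32 buckets and count the inclusive samples that land in each one.
NUM_BUCKETS = 32

def when_histogram(sample_times, start, end):
    width = (end - start) / NUM_BUCKETS
    buckets = [0] * NUM_BUCKETS
    for t in sample_times:
        i = min(int((t - start) / width), NUM_BUCKETS - 1)  # clamp t == end
        buckets[i] += 1
    return buckets

# Samples concentrated early in a 320 ms request land in the first buckets,
# producing tall bars at the left of the node's When column.
hist = when_histogram([1, 5, 9, 12, 300], start=0, end=320)
```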

Troubleshooting
Too many active profiling sessions
Currently, you can enable profiler on a maximum of four Azure web apps and deployment slots that are running
in the same service plan. If the profiler web job is reporting too many active profiling sessions, move some web
apps to a different service plan.
How do I determine whether Application Insights Profiler is running?
The profiler runs as a continuous web job in the web app. You can open the web app resource in the Azure
portal. In the WebJobs pane, check the status of ApplicationInsightsProfiler. If it isn't running, open Logs to
get more information.
Why can't I find any stack examples, even though the profiler is running?
Here are a few things that you can check:
Make sure that your web app service plan is Basic tier or above.
Make sure that your web app has Application Insights SDK 2.2 Beta or later enabled.
Make sure that your web app has the APPINSIGHTS_INSTRUMENTATIONKEY setting configured with the
same instrumentation key that's used by the Application Insights SDK.
Make sure that your web app is running on .NET Framework 4.6.
If your web app is an ASP.NET Core application, check the required dependencies.
After the profiler is started, there is a short warmup period during which the profiler actively collects several
performance traces. After that, the profiler collects performance traces for two minutes in every hour.
I was using Azure Service Profiler. What happened to it?
When you enable Application Insights Profiler, the Azure Service Profiler agent is disabled.
Double counting in parallel threads
In some cases, the total time metric in the stack viewer is more than the duration of the request.
This might occur when there are two or more threads associated with a request, and they are operating in
parallel. In that case, the total thread time is more than the elapsed time. One thread might be waiting on the
other to be completed. The viewer tries to detect this and omits the uninteresting wait, but it errs on the side of
showing too much rather than omitting what might be critical information.
When you see parallel threads in your traces, determine which threads are waiting so you can determine the
critical path for the request. In most cases, the thread that quickly goes into a wait state is simply waiting on the
other threads. Concentrate on the other threads and ignore the time in the waiting threads.
No profiling data
Here are a few things that you can check:
If the data you are trying to view is older than a couple of weeks, try limiting your time filter and try again.
Check that proxies or a firewall have not blocked access to [Link]
Check that the Application Insights instrumentation key you are using in your app is the same as the
Application Insights resource that you used to enable profiling. The key usually is in
ApplicationInsights.config, but also might be located in the web.config or appsettings.json files.
Error report in the profiling viewer
Submit a support ticket in the portal. Be sure to include the correlation ID from the error message.
Deployment error: Directory Not Empty 'D:\home\site\wwwroot\App_Data\jobs'
If you are redeploying your web app to an App Service resource with the profiler enabled, you might see a
message that looks like the following:
Directory Not Empty 'D:\home\site\wwwroot\App_Data\jobs'
This error occurs if you run Web Deploy from scripts or from Visual Studio Team Services Deployment Pipeline.
The solution is to add the following additional deployment parameters to the Web Deploy task:

-skip:Directory='.*\\App_Data\\jobs\\continuous\\ApplicationInsightsProfiler.*' -
skip:skipAction=Delete,objectname='dirPath',absolutepath='.*\\App_Data\\jobs\\continuous$' -
skip:skipAction=Delete,objectname='dirPath',absolutepath='.*\\App_Data\\jobs$' -
skip:skipAction=Delete,objectname='dirPath',absolutepath='.*\\App_Data$'

These parameters delete the folder that's used by Application Insights Profiler and unblock the redeploy process.
They don't affect the profiler instance that's currently running.

Manual installation
When you configure the profiler, updates are made to the web app's settings. You can apply the updates
manually if your environment requires it. For example, if your application runs in App Service Environment for
PowerApps.
1. In the web app control pane, open Settings.
2. Set .Net Framework version to v4.6.
3. Set Always On to On.
4. Add the APPINSIGHTS_INSTRUMENTATIONKEY app setting and set the value to the same instrumentation
key that's used by the SDK.
5. Open Advanced Tools.
6. Select Go to open the Kudu website.
7. On the Kudu website, select Site extensions.
8. Install Application Insights from the Azure Web Apps Gallery.
9. Restart the web app.

Manually trigger profiler


When we developed the profiler, we added a command-line interface so that we could test it on App Services.
Using this same interface, users can also customize how the profiler starts. At a high level, the profiler
uses App Service's Kudu system to manage profiling in the background. When you install the Application
Insights extension, we create a continuous web job that hosts the profiler. We will use this same technology to
create a new web job that you can customize to fit your needs.
This section explains how to:
1. Create a web job which can start the profiler for two minutes with the press of a button.
2. Create a web job which can schedule the profiler to run.
3. Set arguments for the profiler.
Set up
First, let's get familiar with the web jobs dashboard. Under Settings, click the WebJobs tab.

As you can see, this dashboard shows all of the web jobs currently installed on your site. The
ApplicationInsightsProfiler2 web job is the one running the profiler. This is where we will create
our new web jobs for manual and scheduled profiling.
First, let's get the binaries we will need.
1. Go to the Kudu site. Under the Development Tools tab, click "Advanced Tools" (the tab with the Kudu
logo), and then click "Go". This takes you to a new site and logs you in automatically.
2. Next, we need to download the profiler binaries. Navigate to the file explorer via Debug Console -> CMD
at the top of the page.
3. Click site -> wwwroot -> App_Data -> jobs -> continuous. You should see a folder named
"ApplicationInsightsProfiler2". Click the download icon to the left of the folder to download it as a .zip
archive.
4. This download contains all the files you will need moving forward. I recommend creating a clean directory
and moving the zip archive there before moving on.
Setting up the web job archive
When you add a new web job to the Azure website, you essentially create a .zip archive with a run.cmd inside. The
run.cmd tells the web job system what to do when you run the web job. There are other options described in the
web jobs documentation, but for our purpose we don't need anything else.
1. To start, create a new folder; I named mine "RunProfiler2Minutes".
2. Copy the files from the extracted ApplicationInsightsProfiler2 folder into this new folder.
3. Create a new run.cmd file. (I opened this working folder in VS Code before starting, for convenience.)
4. Add the command:
ApplicationInsightsProfiler.exe start --engine-mode immediate --single --immediate-profiling-duration 120
a. start tells the profiler to start.
b. --engine-mode immediate tells the profiler to begin profiling immediately.
c. --single means to run once and then stop automatically.
d. --immediate-profiling-duration 120 means to run the profiler for 120 seconds, or 2 minutes.
5. Save this file.
6. Archive this folder: right-click the folder and choose Send to -> Compressed (zipped) folder. This creates a
.zip file using the name of your folder.

We now have a web job .zip we can use to set up web jobs in our site.
Add a new web job
Next, we will add a new web job to our site. This example shows how to add a manually triggered web job.
After you can do that, the process is almost exactly the same for scheduled web jobs. You can read more about
scheduled triggered jobs on your own.
1. Go to the web jobs dashboard.
2. Click on the Add command from the toolbar.
3. Give your web job a name. I chose to match the name of my archive, both for clarity and to allow for
different versions later.
4. In the file upload part of the form click on the open file icon and find the .zip file you made above.
5. For the type, choose Triggered.
6. For the Triggers choose Manual.
7. Hit OK to save.

Run the profiler


Now that we have a new web job that we can trigger manually we can try to run it.
1. By design, only one ApplicationInsightsProfiler.exe process can run on a machine at any given
time. So, to start, make sure to stop the continuous web job from this dashboard: click its row
and press "Stop". Refresh via the toolbar and confirm that the status shows the job is stopped.
2. Click on the row with the new web job you've added and press run.
3. With the row still selected, click the Logs command in the toolbar. This brings you to a dashboard for the
web job you started, listing the most recent runs and their results.
4. Click on the run you've just started.
5. If all went well, you should see diagnostic logs from the profiler confirming that profiling has
started.
Things to consider
Though this method is relatively straightforward there are some things to consider.
1. Because this is not managed by our service we will have no way of updating the agent binaries for your web
job. We do not currently have a stable download page for our binaries so the only way to get the latest is by
updating your extension and grabbing it from the continuous folder like we did above.
2. As this uses command-line arguments that were originally designed for developer use rather than
end-user use, the arguments may change in the future, so be aware of that when upgrading. It
shouldn't be much of a problem, because you can add a web job, run it, and test that it works. Eventually we will
build UI to do this without the manual process, but it's something to consider.
3. The Web Jobs feature for App Services is unique in that when it runs the web job it ensures that your process
has the same environment variables and app settings that your web site will end up having. This means that
you do not need to pass the instrumentation key through the command line to the profiler, it should just pick
up the instrumentation key from the environment. However if you want to run the profiler on your dev box or
on a machine outside of App Services you need to supply an instrumentation key. You can do this by passing
in an argument --ikey <instrumentation-key>. Note that this value must match the instrumentation key your
application is using. The profiler's log output tells you which ikey it started with, and whether activity from
that instrumentation key was detected while profiling.
4. Manually triggered web jobs can also be triggered via a webhook. You can get this URL by right-clicking
the web job in the dashboard and viewing its properties, or by choosing Properties in the toolbar
after selecting the web job in the table. There are many articles online about this, so I won't go into
much detail, but it opens up the possibility of triggering the profiler from your CI/CD pipeline (like
VSTS) or from something like Microsoft Flow. Depending on how fancy you want to make your run.cmd (which,
by the way, can be a run.ps1), the possibilities are extensive.

ASP.NET Core support


An ASP.NET Core application needs to install the Microsoft.ApplicationInsights.AspNetCore NuGet package 2.1.0-
beta6 or later to work with the profiler. As of June 27, 2017, we don't support earlier versions.

Next steps
Working with Application Insights in Visual Studio
Enable Application Insights Profiler for Azure VMs,
Service Fabric, and Cloud Services
12/7/2017 • 6 min to read

This article demonstrates how to enable Azure Application Insights Profiler on an ASP.NET application that is hosted
by an Azure compute resource.
The examples in this article include support for Azure Virtual Machines, virtual machine scale sets, Azure Service
Fabric, and Azure Cloud Services. The examples rely on templates that support the Azure Resource Manager
deployment model.

Overview
The following image shows how the Application Insights profiler works with Azure resources. The image uses an
Azure virtual machine as an example.

To fully enable the profiler, you must change the configuration in three locations:
The Application Insights instance pane in the Azure portal.
The application source code (for example, an ASP.NET web application).
The environment deployment definition source code (for example, a VM deployment template .json file).

Set up the Application Insights instance


In the Azure portal, create or go to the Application Insights instance that you want to use. Note the instance
instrumentation key. You use the instrumentation key in other configuration steps.
This instance should be the same one that your application is configured to send telemetry data to on each request.
Profiler results also are available in this instance.
In the Azure portal, complete the steps that are described in Enable the profiler to finish setting up the Application
Insights instance for the profiler. You don't need to link web apps for the example in this article. Just ensure that the
profiler is enabled in the portal.

Set up the application source code


Set up your application to send telemetry data to an Application Insights instance on each Request operation:
1. Add the Application Insights SDK to your application project. Make sure that the NuGet package versions are
as follows:
For ASP.NET applications: Microsoft.ApplicationInsights.Web 2.3.0 or later.
For ASP.NET Core applications: Microsoft.ApplicationInsights.AspNetCore 2.1.0 or later.
For other .NET and .NET Core applications (for example, a Service Fabric stateless service or a Cloud
Services worker role): Microsoft.ApplicationInsights 2.3.0 or later.
2. If your application is not an ASP.NET or ASP.NET Core application (for example, if it's a Cloud Services worker
role or Service Fabric stateless API), the following extra instrumentation setup is required:
a. Add the following code early in the application lifetime:
using Microsoft.ApplicationInsights.Extensibility;
...
// Replace with your own Application Insights instrumentation key.
TelemetryConfiguration.Active.InstrumentationKey = "00000000-0000-0000-0000-000000000000";

For more information about this global instrumentation key configuration, see Use Service Fabric with
Application Insights.
b. For any piece of code that you want to instrument, add a StartOperation<RequestTelemetry> using
statement around it, as in the following example:

using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
...
var client = new TelemetryClient();
...
using (var operation = client.StartOperation<RequestTelemetry>("Insert_Your_Custom_Event_Unique_Name"))
{
    // ... Code I want to profile.
}

Calling StartOperation<RequestTelemetry> within another StartOperation<RequestTelemetry> scope is not
supported. You can use StartOperation<DependencyTelemetry> in the nested scope instead. For example:

using (var getDetailsOperation = client.StartOperation<RequestTelemetry>("GetProductDetails"))
{
    try
    {
        ProductDetail details = new ProductDetail() { Id = productId };
        getDetailsOperation.Telemetry.Properties["ProductId"] = productId.ToString();

        // By using DependencyTelemetry, 'GetProductPrice' is correctly linked as part of the
        // 'GetProductDetails' request.
        using (var getPriceOperation = client.StartOperation<DependencyTelemetry>("GetProductPrice"))
        {
            double price = await _priceDataBase.GetAsync(productId);
            if (IsTooCheap(price))
            {
                throw new PriceTooLowException(productId);
            }
            details.Price = price;
        }

        // Similarly, note how 'GetProductReviews' doesn't establish another RequestTelemetry.
        using (var getReviewsOperation = client.StartOperation<DependencyTelemetry>("GetProductReviews"))
        {
            details.Reviews = await _reviewDataBase.GetAsync(productId);
        }

        getDetailsOperation.Telemetry.Success = true;
        return details;
    }
    catch (Exception ex)
    {
        getDetailsOperation.Telemetry.Success = false;

        // This exception gets linked to the 'GetProductDetails' request telemetry.
        client.TrackException(ex);
        throw;
    }
}
Set up the environment deployment definition
The environment in which the profiler and your application execute can be a virtual machine, a virtual machine scale
set, a Service Fabric cluster, or a Cloud Services instance.
Virtual machines, virtual machine scale sets, or Service Fabric
Full examples:
Virtual machine
Virtual machine scale set
Service Fabric cluster
1. To ensure that .NET Framework 4.6.1 or later is in use, it's sufficient to confirm that the deployed OS is
Windows Server 2012 R2 or later.

2. Locate the Azure Diagnostics extension in the deployment template file, and then add the following
SinksConfig section as a child element of WadCfg . Replace the ApplicationInsightsProfiler property value
with your own Application Insights instrumentation key:

"SinksConfig": {
"Sink": [
{
"name": "MyApplicationInsightsProfilerSink",
"ApplicationInsightsProfiler": "00000000-0000-0000-0000-000000000000"
}
]
}

For information about adding the Diagnostics extension to your deployment template, see Use monitoring
and diagnostics with a Windows VM and Azure Resource Manager templates.
Cloud Services
1. To ensure that .NET Framework 4.6.1 or later is in use, it's sufficient to confirm that
ServiceConfiguration.*.cscfg files have an osFamily value of "5" or later.
2. Locate the Azure Diagnostics diagnostics.wadcfgx file for your application role:

If you can't find the file, to learn how to enable the Diagnostics extension in your Cloud Services project, see
Set up diagnostics for Azure Cloud Services and virtual machines.
3. Add the following SinksConfig section as a child element of WadCfg :
<WadCfg>
<DiagnosticMonitorConfiguration>...</DiagnosticMonitorConfiguration>
<SinksConfig>
<Sink name="MyApplicationInsightsProfiler">
<!-- Replace with your own Application Insights instrumentation key. -->
<ApplicationInsightsProfiler>00000000-0000-0000-0000-000000000000</ApplicationInsightsProfiler>
</Sink>
</SinksConfig>
</WadCfg>

NOTE
If the diagnostics.wadcfgx file also contains another sink of type ApplicationInsights , all three of these
instrumentation keys must match:
The instrumentation key used by your application.
The instrumentation key used by the ApplicationInsights sink.
The instrumentation key used by the ApplicationInsightsProfiler sink.
You can find the actual instrumentation key value used by the ApplicationInsights sink in the ServiceConfiguration.*.cscfg
files.
After the Visual Studio 15.5 Azure SDK release, only the instrumentation keys used by the application and
ApplicationInsightsProfiler sink need to match each other.

Environment deployment and runtime configurations


1. Deploy the modified environment deployment definition.
To apply the modifications, you typically perform a full template deployment or a cloud services publish
through PowerShell cmdlets or Visual Studio.
The following alternate approach for existing virtual machines touches only the Azure Diagnostics
extension:

$ConfigFilePath = [IO.Path]::GetTempFileName()

# After you export the currently deployed Diagnostics config to a file, edit it to include the
# ApplicationInsightsProfiler sink.
(Get-AzureRmVMDiagnosticsExtension -ResourceGroupName "MyRG" -VMName "MyVM").PublicSettings | Out-File -Verbose $ConfigFilePath

# Set-AzureRmVMDiagnosticsExtension might require the -StorageAccountName argument
# if your original diagnostics configuration had the storageAccountName property in the protectedSettings section
# (which is not downloadable). Make sure to pass the same original value you had in this cmdlet call.
Set-AzureRmVMDiagnosticsExtension -ResourceGroupName "MyRG" -VMName "MyVM" -DiagnosticsConfigurationPath $ConfigFilePath

2. If the intended application is running through IIS, enable the IIS Http Tracing Windows feature:
a. Establish remote access to the environment, and then use the Add Windows Features window, or run the
following command in PowerShell (as administrator):

Enable-WindowsOptionalFeature -FeatureName IIS-HttpTracing -Online -All

b. If establishing remote access is a problem, you can use the Azure CLI to run the same command:

az vm run-command invoke -g MyResourceGroupName -n MyVirtualMachineName --command-id RunPowerShellScript --scripts "Enable-WindowsOptionalFeature -FeatureName IIS-HttpTracing -Online -All"

Enable the profiler on on-premises servers


Enabling the profiler on an on-premises server is also known as running Application Insights Profiler in standalone
mode (that is, without the Azure Diagnostics extension).
We have no plans to officially support the profiler for on-premises servers. If you want to experiment with this
scenario, you can download the support code, but we don't maintain that code or respond to issues and feature
requests related to it.

Next steps
Generate traffic to your application (for example, launch an availability test). Then, wait 10 to 15 minutes for
traces to start to be sent to the Application Insights instance.
See Profiler traces in the Azure portal.
Get help with troubleshooting profiler issues in Profiler troubleshooting.
Read more about the profiler in Application Insights Profiler.
Preview upcoming changes to Azure Application
Insights
11/1/2017 • 1 min to read

Application Insights frequently releases new features. If you want to see previews of these improvements, you can
sign up on the Application Insights Preview blade. The development team makes previews of new features available
on a limited basis before releasing them to all users.
The following image illustrates how to set your preview preferences.

Set preferences
On the Preview blade, you can select from the following options for when you see previews:
Always: You see preview experiences as soon as they're available.
Auto: You see preview experiences that Microsoft recommends for your account.
Never: You see only the preview experiences that you select.

Next steps
Create a resource
More telemetry from Application Insights
11/7/2017 • 1 min to read

After you have added Application Insights to your ASP.NET code, there are a few things you can do to get even
more telemetry.

ACTION | WHAT YOU GET

(IIS servers) Install Status Monitor on each server machine. | Performance counters.
(Azure web apps) In the Azure control panel for the web app, open the Application Insights blade. | Exceptions with detailed stack traces; dependencies.
Add the JavaScript snippet to your web pages. | Page performance, browser exceptions, AJAX performance, and custom client-side telemetry.
Create availability web tests. | Alerts if your site becomes unavailable.
Ensure BuildInfo.config is generated by MSBuild. | Build annotations in metric charts.
Write custom events and metrics. | Counts of business events and metrics, detailed usage tracking, and more.
Profile your live site. | Detailed function timings from your live web app.
Diagnose exceptions in your web apps with
Application Insights
1/3/2018 • 9 min to read

Exceptions in your live web app are reported by Application Insights. You can correlate failed requests with
exceptions and other events at both the client and server, so that you can quickly diagnose the causes.

Set up exception reporting


To have exceptions reported from your server app:
Install Application Insights SDK in your app code, or
IIS web servers: Run Application Insights Agent; or
Azure web apps: Add the Application Insights Extension
Java web apps: Install the Java agent
Install the JavaScript snippet in your web pages to catch browser exceptions.
In some application frameworks or with some settings, you need to take some extra steps to catch more
exceptions:
Web forms
MVC
Web API 1.*
Web API 2.*
WCF

Diagnosing exceptions using Visual Studio


Open the app solution in Visual Studio to help with debugging.
Run the app, either on your server or on your development machine by using F5.
Open the Application Insights Search window in Visual Studio, and set it to display events from your app. While
you're debugging, you can do this just by clicking the Application Insights button.
Notice that you can filter the report to show just exceptions.
No exceptions showing? See Capture exceptions.
Click an exception report to show its stack trace. Click a line reference in the stack trace, to open the relevant
code file.
In the code, notice that CodeLens shows data about the exceptions:

Diagnosing failures using the Azure portal


Application Insights comes with a curated APM experience to help you diagnose failures in your monitored
applications. To start, click on the Failures option in the Application Insights resource menu located in the
Investigate section. You should see a full-screen view that shows you the failure rate trends for your requests,
how many of them are failing, and how many users are impacted. On the right you'll see some of the most
useful distributions specific to the selected failing operation, including top 3 response codes, top 3 exception
types, and top 3 failing dependency types.
In a single click you can then review representative samples for each of these subsets of operations. In
particular, to diagnose exceptions, you can click on the count of a particular exception to be presented with an
Exceptions details blade, such as this one:
Alternatively, instead of looking at exceptions of a specific failing operation, you can start from the overall
view of exceptions, by switching to the Exceptions tab:
Here you can see all the exceptions collected for your monitored app.
No exceptions showing? See Capture exceptions.

Custom tracing and log data


To get diagnostic data specific to your app, you can insert code to send your own telemetry data. This data is
displayed in diagnostic search alongside the requests, page views, and other automatically collected data.
You have several options:
TrackEvent() is typically used for monitoring usage patterns, but the data it sends also appears under
Custom Events in diagnostic search. Events are named, and can carry string properties and numeric metrics
on which you can filter your diagnostic searches.
TrackTrace() lets you send longer data such as POST information.
TrackException() sends stack traces. More about exceptions.
If you already use a logging framework like Log4Net or NLog, you can capture those logs and see them in
diagnostic search alongside request and exception data.
To see these events, open Search, open Filter, and then choose Custom Event, Trace, or Exception.
NOTE
If your app generates a lot of telemetry, the adaptive sampling module will automatically reduce the volume that is sent
to the portal by sending only a representative fraction of events. Events that are part of the same operation will be
selected or deselected as a group, so that you can navigate between related events. Learn about sampling.
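To illustrate the "selected or deselected as a group" behavior, a fixed-rate sampler can be sketched as a deterministic score computed from the operation ID, so everything sharing that ID gets the same keep-or-drop verdict. This is a simplified sketch for intuition only: the hash here is plain FNV-1a, not the SDK's actual sampling algorithm.

```csharp
// Illustrative only: a deterministic per-operation sampling decision.
// The hash is plain FNV-1a; the real SDK uses its own algorithm.
static class OperationSampler
{
    // Maps an operation ID to a stable score in [0, 100).
    public static double Score(string operationId)
    {
        uint hash = 2166136261;   // FNV-1a offset basis
        foreach (char c in operationId)
        {
            hash ^= c;
            hash *= 16777619;     // FNV-1a prime
        }
        return hash % 10000 / 100.0;
    }

    // Every item that shares an operation ID gets the same verdict, so
    // related events are kept or dropped together.
    public static bool IsSampledIn(string operationId, double retainedPercent)
        => Score(operationId) < retainedPercent;
}
```

Because the score depends only on the operation ID, a request and all of its dependency and exception telemetry survive sampling together, which is what lets you still navigate between related events in the portal.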

How to see request POST data


Request details don't include the data sent to your app in a POST call. To have this data reported:
Install the SDK in your application project.
Insert code in your application to call Microsoft.ApplicationInsights.TrackTrace(). Send the POST data in the
message parameter. There is a limit to the permitted size, so you should send only the essential data.
When you investigate a failed request, find the associated traces.
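A minimal sketch of preparing the POST body for such a trace, assuming a TelemetryClient named telemetry; the 8 KB cutoff is an arbitrary assumption for illustration, not a documented limit, so check the current Application Insights limits for the real value:

```csharp
// Minimal sketch: trim the POST body before tracing it.
// NOTE: the 8 KB cutoff is an assumed value, not a documented limit.
static class PostDataTrace
{
    public const int MaxLength = 8 * 1024;

    // Returns at most MaxLength characters of the POST body so the
    // trace message stays within the permitted size.
    public static string Trim(string postBody)
    {
        if (string.IsNullOrEmpty(postBody)) return string.Empty;
        return postBody.Length <= MaxLength
            ? postBody
            : postBody.Substring(0, MaxLength);
    }
}

// Usage, assuming an Application Insights TelemetryClient named 'telemetry':
// telemetry.TrackTrace("POST data: " + PostDataTrace.Trim(postBody));
```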
Capturing exceptions and related diagnostic data
At first, you won't see all the exceptions that cause failures in your app in the portal. You'll see any browser
exceptions (if you're using the JavaScript SDK in your web pages), but most server exceptions are caught by IIS,
so you have to write a bit of code to see them.
You can:
Log exceptions explicitly by inserting code in exception handlers to report the exceptions.
Capture exceptions automatically by configuring your [Link] framework. The necessary additions are
different for different types of framework.

Reporting exceptions explicitly


The simplest way is to insert a call to TrackException() in an exception handler.
JavaScript

try
{ ...
}
catch (ex)
{
    appInsights.trackException(ex, "handler loc",
        {Game: currentGame.Name,
         State: currentGame.State.ToString()});
}

C#

var telemetry = new TelemetryClient();
...
try
{ ...
}
catch (Exception ex)
{
    // Set up some properties:
    var properties = new Dictionary<string, string>
        {{"Game", currentGame.Name}};

    var measurements = new Dictionary<string, double>
        {{"Users", currentGame.Users.Count}};

    // Send the exception telemetry:
    telemetry.TrackException(ex, properties, measurements);
}

VB
Dim telemetry = New TelemetryClient
...
Try
    ...
Catch ex as Exception
    ' Set up some properties:
    Dim properties = New Dictionary (Of String, String)
    properties.Add("Game", currentGame.Name)

    Dim measurements = New Dictionary (Of String, Double)
    measurements.Add("Users", currentGame.Users.Count)

    ' Send the exception telemetry:
    telemetry.TrackException(ex, properties, measurements)
End Try

The properties and measurements parameters are optional, but are useful for filtering and adding extra
information. For example, if you have an app that can run several games, you could find all the exception
reports related to a particular game. You can add as many items as you like to each dictionary.

Browser exceptions
Most browser exceptions are reported.
If your web page includes script files from content delivery networks or other domains, ensure your script tag
has the attribute crossorigin="anonymous" , and that the server sends CORS headers. This will allow you to get a
stack trace and detail for unhandled JavaScript exceptions from these resources.

Web forms
For web forms, the HTTP Module will be able to collect the exceptions when there are no redirects configured
with CustomErrors.
But if you have active redirects, add the following lines to the Application_Error function in Global.asax. (Add
a Global.asax file if you don't already have one.)
C#

void Application_Error(object sender, EventArgs e)
{
    if (HttpContext.Current.IsCustomErrorEnabled && Server.GetLastError() != null)
    {
        var ai = new TelemetryClient(); // or re-use an existing instance

        ai.TrackException(Server.GetLastError());
    }
}

MVC
If the CustomErrors configuration is Off , then exceptions will be available for the HTTP Module to collect.
However, if it is RemoteOnly (the default) or On , then the exception is cleared and not available for
Application Insights to collect automatically. You can fix that by overriding the
System.Web.Mvc.HandleErrorAttribute class and applying the overridden class as shown for the different MVC
versions below (GitHub source):
using System;
using System.Web.Mvc;
using Microsoft.ApplicationInsights;

namespace MVC2App.Controllers
{
    [AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, Inherited = true, AllowMultiple = true)]
    public class AiHandleErrorAttribute : HandleErrorAttribute
    {
        public override void OnException(ExceptionContext filterContext)
        {
            if (filterContext != null && filterContext.HttpContext != null && filterContext.Exception != null)
            {
                //If customError is Off, then AI HTTPModule will report the exception
                if (filterContext.HttpContext.IsCustomErrorEnabled)
                {   //or reuse instance (recommended!). see note above
                    var ai = new TelemetryClient();
                    ai.TrackException(filterContext.Exception);
                }
            }
            base.OnException(filterContext);
        }
    }
}

MVC 2
Replace the HandleError attribute with your new attribute in your controllers.

namespace MVC2App.Controllers
{
    [AiHandleError]
    public class HomeController : Controller
    {
        ...

Sample
MVC 3
Register AiHandleErrorAttribute as a global filter in Global.asax.cs:

public class MyMvcApplication : System.Web.HttpApplication
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new AiHandleErrorAttribute());
    }
    ...

Sample
MVC 4, MVC5
Register AiHandleErrorAttribute as a global filter in FilterConfig.cs:
public class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        // Default replaced with the override to track unhandled exceptions
        filters.Add(new AiHandleErrorAttribute());
    }
}

Sample

Web API 1.x


Override System.Web.Http.Filters.ExceptionFilterAttribute:

using System.Web.Http.Filters;
using Microsoft.ApplicationInsights;

namespace WebAPI.App_Start
{
    public class AiExceptionFilterAttribute : ExceptionFilterAttribute
    {
        public override void OnException(HttpActionExecutedContext actionExecutedContext)
        {
            if (actionExecutedContext != null && actionExecutedContext.Exception != null)
            {   //or reuse instance (recommended!). see note above
                var ai = new TelemetryClient();
                ai.TrackException(actionExecutedContext.Exception);
            }
            base.OnException(actionExecutedContext);
        }
    }
}

You could add this overridden attribute to specific controllers, or add it to the global filter configuration in the
WebApiConfig class:

using System.Web.Http;
using WebApi1.x.App_Start;

namespace WebApi1.x
{
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            config.Routes.MapHttpRoute(name: "DefaultApi", routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional });
            ...
            config.EnableSystemDiagnosticsTracing();

            // Capture exceptions for Application Insights:
            config.Filters.Add(new AiExceptionFilterAttribute());
        }
    }
}

Sample
There are a number of cases that the exception filters cannot handle. For example:
Exceptions thrown from controller constructors.
Exceptions thrown from message handlers.
Exceptions thrown during routing.
Exceptions thrown during response content serialization.

Web API 2.x


Add an implementation of IExceptionLogger:

using System.Web.Http.ExceptionHandling;
using Microsoft.ApplicationInsights;

namespace ProductsAppPureWebAPI.App_Start
{
    public class AiExceptionLogger : ExceptionLogger
    {
        public override void Log(ExceptionLoggerContext context)
        {
            if (context != null && context.Exception != null)
            {   //or reuse instance (recommended!). see note above
                var ai = new TelemetryClient();
                ai.TrackException(context.Exception);
            }
            base.Log(context);
        }
    }
}

Add this to the services in WebApiConfig:

using System.Web.Http;
using System.Web.Http.ExceptionHandling;
using ProductsAppPureWebAPI.App_Start;

namespace WebApi2WithMVC
{
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // Web API configuration and services

            // Web API routes
            config.MapHttpAttributeRoutes();

            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );
            config.Services.Add(typeof(IExceptionLogger), new AiExceptionLogger());
        }
    }
}
Sample
As alternatives, you could:
1. Replace the only ExceptionHandler with a custom implementation of IExceptionHandler. This is called only
when the framework is still able to choose which response message to send (not, for instance, when the
connection is aborted).
2. Use exception filters (as described in the section on Web API 1.x controllers above), which are not called in all cases.

WCF
Add a class that extends Attribute and implements IErrorHandler and IServiceBehavior.

using System;
using System.Collections.Generic;
using System.Linq;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;
using System.Web;
using Microsoft.ApplicationInsights;

namespace WcfService4.ErrorHandling
{
    public class AiLogExceptionAttribute : Attribute, IErrorHandler, IServiceBehavior
    {
        public void AddBindingParameters(ServiceDescription serviceDescription,
            System.ServiceModel.ServiceHostBase serviceHostBase,
            System.Collections.ObjectModel.Collection<ServiceEndpoint> endpoints,
            System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
        {
        }

        public void ApplyDispatchBehavior(ServiceDescription serviceDescription,
            System.ServiceModel.ServiceHostBase serviceHostBase)
        {
            foreach (ChannelDispatcher disp in serviceHostBase.ChannelDispatchers)
            {
                disp.ErrorHandlers.Add(this);
            }
        }

        public void Validate(ServiceDescription serviceDescription,
            System.ServiceModel.ServiceHostBase serviceHostBase)
        {
        }

        bool IErrorHandler.HandleError(Exception error)
        {   //or reuse instance (recommended!). see note above
            var ai = new TelemetryClient();
            ai.TrackException(error);
            return false;
        }

        void IErrorHandler.ProvideFault(Exception error,
            System.ServiceModel.Channels.MessageVersion version,
            ref System.ServiceModel.Channels.Message fault)
        {
        }
    }
}

Add the attribute to the service implementations:


namespace WcfService4
{
[AiLogException]
public class Service1 : IService1
{
...

Sample

Exception performance counters


If you have installed the Application Insights Agent on your server, you can get a chart of the exceptions rate,
measured by .NET. This includes both handled and unhandled .NET exceptions.
Open a Metric Explorer blade, add a new chart, and select Exception rate, listed under Performance Counters.
The .NET framework calculates the rate by counting the number of exceptions in an interval and dividing by the
length of the interval.
This is different from the Exceptions count calculated by the Application Insights portal, which counts
TrackException reports. The sampling intervals are different, and the SDK doesn't send TrackException reports
for all handled and unhandled exceptions.
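The counter arithmetic itself is simple division, as this sketch shows (the interval length used below is just an example; the CLR chooses its own sampling interval):

```csharp
// Sketch of the counter's arithmetic: exceptions thrown during the
// sampling interval divided by the interval length, handled or not.
static class ExceptionRate
{
    public static double PerSecond(int exceptionsInInterval, double intervalSeconds)
        => exceptionsInInterval / intervalSeconds;
}
```

For example, 45 exceptions observed across a 10-second interval yields a rate of 4.5 exceptions per second, and that figure is plotted regardless of whether the exceptions were handled.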


Next steps
Monitor REST, SQL, and other calls to dependencies
Monitor page load times, browser exceptions, and AJAX calls
Monitor performance counters
Explore .NET trace logs in Application Insights
11/1/2017 • 5 min to read

If you use NLog, log4Net, or System.Diagnostics.Trace for diagnostic tracing in your ASP.NET application, you can
have your logs sent to Azure Application Insights, where you can explore and search them. Your logs will be
merged with the other telemetry coming from your application, so that you can identify the traces associated
with servicing each user request, and correlate them with other events and exception reports.

NOTE
Do you need the log capture module? It's a useful adapter for 3rd-party loggers, but if you aren't already using NLog,
log4Net, or System.Diagnostics.Trace, consider just calling Application Insights TrackTrace() directly.

Install logging on your app


Install your chosen logging framework in your project. This should result in an entry in app.config or web.config.
If you're using System.Diagnostics.Trace, you need to add an entry to web.config:

<configuration>
  <system.diagnostics>
    <trace autoflush="false" indentsize="4">
      <listeners>
        <add name="myListener"
          type="Microsoft.ApplicationInsights.TraceListener.ApplicationInsightsTraceListener, Microsoft.ApplicationInsights.TraceListener"
          initializeData="[Link]" />
        <remove name="Default" />
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>

Configure Application Insights to collect logs


Add Application Insights to your project if you haven't done that yet. You'll see an option to include the log
collector.
Or Configure Application Insights by right-clicking your project in Solution Explorer. Select the option to
Configure trace collection.
No Application Insights menu or log collector option? Try Troubleshooting.

Manual installation
Use this method if your project type isn't supported by the Application Insights installer (for example a Windows
desktop project).
1. If you plan to use log4Net or NLog, install it in your project.
2. In Solution Explorer, right-click your project and choose Manage NuGet Packages.
3. Search for "Application Insights"
4. Select the appropriate package - one of:
Microsoft.ApplicationInsights.TraceListener (to capture System.Diagnostics.Trace calls)
Microsoft.ApplicationInsights.EventSourceListener (to capture EventSource events)
Microsoft.ApplicationInsights.EtwCollector (to capture ETW events)
Microsoft.ApplicationInsights.NLogTarget
Microsoft.ApplicationInsights.Log4NetAppender
The NuGet package installs the necessary assemblies, and also modifies web.config or app.config.

Insert diagnostic log calls


If you use System.Diagnostics.Trace, a typical call would be:

System.Diagnostics.Trace.TraceWarning("Slow response - database01");

If you prefer log4net or NLog:

logger.Warn("Slow response - database01");

Using EventSource events


You can configure System.Diagnostics.Tracing.EventSource events to be sent to Application Insights as traces.
First, install the Microsoft.ApplicationInsights.EventSourceListener NuGet package. Then edit the TelemetryModules
section of the ApplicationInsights.config file.

<Add Type="Microsoft.ApplicationInsights.EventSourceListener.EventSourceTelemetryModule, Microsoft.ApplicationInsights.EventSourceListener">
  <Sources>
    <Add Name="MyCompany" Level="Verbose" />
  </Sources>
</Add>

For each source, you can set the following parameters:


Name specifies the name of the EventSource to collect.
Level specifies the logging level to collect. Can be one of Critical , Error , Informational , LogAlways ,
Verbose , Warning .
Keywords (Optional) specifies the integer value of keywords combinations to use.

Using DiagnosticSource events


You can configure System.Diagnostics.DiagnosticSource events to be sent to Application Insights as traces. First,
install the Microsoft.ApplicationInsights.DiagnosticSourceListener NuGet package. Then edit the
TelemetryModules section of the ApplicationInsights.config file.

<Add Type="Microsoft.ApplicationInsights.DiagnosticSourceListener.DiagnosticSourceTelemetryModule, Microsoft.ApplicationInsights.DiagnosticSourceListener">
  <Sources>
    <Add Name="MyDiagnosticSourceName" />
  </Sources>
</Add>

For each DiagnosticSource you want to trace, add an entry with the Name attribute set to the name of your
DiagnosticSource.
Using ETW events
You can configure ETW events to be sent to Application Insights as traces. First, install the
Microsoft.ApplicationInsights.EtwCollector NuGet package. Then edit the TelemetryModules section of the
ApplicationInsights.config file.

NOTE
ETW events can only be collected if the process hosting the SDK is running under an identity that is a member of
"Performance Log Users" or Administrators.

<Add Type="Microsoft.ApplicationInsights.EtwCollector.EtwCollectorTelemetryModule, Microsoft.ApplicationInsights.EtwCollector">
  <Sources>
    <Add ProviderName="MyCompanyEventSourceName" Level="Verbose" />
  </Sources>
</Add>

For each source, you can set the following parameters:


ProviderName is the name of the ETW provider to collect.
ProviderGuid specifies the GUID of the ETW provider to collect, can be used instead of ProviderName .
Level sets the logging level to collect. Can be one of Critical , Error , Informational , LogAlways , Verbose ,
Warning .
Keywords (Optional) sets the integer value of keyword combinations to use.

Using the Trace API directly


You can call the Application Insights trace API directly. The logging adapters use this API.
For example:

var telemetry = new TelemetryClient();

telemetry.TrackTrace("Slow response - database01");

An advantage of TrackTrace is that you can put relatively long data in the message. For example, you could
encode POST data there.
In addition, you can add a severity level to your message. And, like other telemetry, you can add property values
that you can use to help filter or search for different sets of traces. For example:

var telemetry = new TelemetryClient();

telemetry.TrackTrace("Slow database response",
    SeverityLevel.Warning,
    new Dictionary<string, string> { { "database", db.ID } });

This would enable you, in Search, to easily filter out all the messages of a particular severity level relating to a
particular database.

Explore your logs


Run your app, either in debug mode or deploy it live.
In your app's overview blade in the Application Insights portal, choose Search.

You can, for example:


Filter on log traces, or on items with specific properties
Inspect a specific item in detail.
Find other telemetry relating to the same user request (that is, with the same OperationId)
Save the configuration of this page as a Favorite

NOTE
Sampling. If your application sends a lot of data and you are using the Application Insights SDK for [Link] version 2.0.0-
beta3 or later, the adaptive sampling feature may operate and send only a percentage of your telemetry. Learn more about
sampling.

Next steps
Diagnose failures and exceptions in ASP.NET
Learn more about Search.

Troubleshooting
How do I do this for Java?
Use the Java log adapters.
There's no Application Insights option on the project context menu
Check Application Insights tools is installed on this development machine. In Visual Studio menu Tools,
Extensions and Updates, look for Application Insights Tools. If it isn't in the Installed tab, open the Online tab
and install it.
This might be a type of project not supported by Application Insights tools. Use manual installation.
No log adapter option in the configuration tool
You need to install the logging framework first.
If you're using System.Diagnostics.Trace, make sure that you configured it in web.config.
Have you got the latest version of Application Insights? In Visual Studio Tools menu, choose Extensions and
Updates, and open the Updates tab. If Developer Analytics tools is there, click to update it.
I get an error "Instrumentation key cannot be empty"
It looks like you installed the logging adapter NuGet package without installing Application Insights.
In Solution Explorer, right-click ApplicationInsights.config and choose Update Application Insights. You'll get
a dialog that invites you to sign in to Azure and either create an Application Insights resource or reuse an
existing one. That should fix it.
I can see traces in diagnostic search, but not the other events
It can sometimes take a while for all the events and requests to get through the pipeline.
How much data is retained?
Several factors impact the amount of data retained. See the limits section of the customer event metrics page for
more information.
I'm not seeing some of the log entries that I expect
If your application sends a lot of data and you are using the Application Insights SDK for [Link] version 2.0.0-
beta3 or later, the adaptive sampling feature may operate and send only a percentage of your telemetry. Learn
more about sampling.

Next steps
Set up availability and responsiveness tests
Troubleshooting
System performance counters in Application Insights
11/13/2017 • 3 min to read

Windows provides a wide variety of performance counters such as CPU occupancy, memory, disk, and network
usage. You can also define your own. Application Insights can show these performance counters if your
application is running under IIS on an on-premises host or virtual machine to which you have administrative
access. The charts indicate the resources available to your live application, and can help to identify unbalanced
load between server instances.
Performance counters appear in the Servers blade, which includes a table that segments by server instance.

(Performance counters aren't available for Azure Web Apps. But you can send Azure Diagnostics to Application
Insights.)

View counters
The Servers blade shows a default set of performance counters.
To see other counters, either edit the charts on the Servers blade, or open a new Metrics Explorer blade and add
new charts.
The available counters are listed as metrics when you edit a chart.
To see all your most useful charts in one place, create a dashboard and pin them to it.

Add counters
If the performance counter you want isn't shown in the list of metrics, that's because the Application Insights SDK
isn't collecting it in your web server. You can configure it to do so.
1. Find out what counters are available in your server by using this PowerShell command at the server:
Get-Counter -ListSet *

(See Get-Counter .)
2. Open ApplicationInsights.config.
If you added Application Insights to your app during development, edit ApplicationInsights.config in
your project, and then redeploy it to your servers.
If you used Status Monitor to instrument a web app at runtime, find ApplicationInsights.config in the
root directory of the app in IIS. Update it there in each server instance.
3. Edit the performance collector directive:

<Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.PerformanceCollectorModule, Microsoft.AI.PerfCounterCollector">
  <Counters>
    <Add PerformanceCounter="\Objects\Processes"/>
    <Add PerformanceCounter="\Sales(photo)\# Items Sold" ReportAs="Photo sales"/>
  </Counters>
</Add>

You can capture both standard counters and those you have implemented yourself. \Objects\Processes is an
example of a standard counter, available on all Windows systems. \Sales(photo)\# Items Sold is an example of a
custom counter that might be implemented in a web service.
The format is \Category(instance)\Counter , or, for categories that don't have instances, just \Category\Counter .
ReportAs is required for counter names that do not match [a-zA-Z()/-_ \.]+ - that is, names that contain
characters outside of the following sets: letters, round brackets, forward slash, hyphen, underscore, space, and dot.
If you specify an instance, it will be collected as a dimension "CounterInstanceName" of the reported metric.
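The naming rule above can be checked with a small helper. The character class below mirrors the set quoted in the text, with the backslash added as an assumption so that whole counter paths can be tested:

```csharp
using System.Text.RegularExpressions;

static class CounterNames
{
    // Character class from the rule above: letters, round brackets,
    // forward slash, hyphen, underscore, space, and dot. The backslash
    // is added here (an assumption) so whole counter paths can be checked.
    static readonly Regex Allowed = new Regex(@"^[a-zA-Z()/\\\-_ \.]+$");

    // ReportAs is required when the counter name contains characters
    // outside the allowed set (for example '#' or digits).
    public static bool NeedsReportAs(string performanceCounter)
        => !Allowed.IsMatch(performanceCounter);
}
```

By this check, \Objects\Processes can be reported under its own name, while \Sales(photo)\# Items Sold needs a ReportAs alias because of the '#' character, which matches the two examples above.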
Collecting performance counters in code
To collect system performance counters and send them to Application Insights, you can adapt the snippet below:
var perfCollectorModule = new PerformanceCollectorModule();
perfCollectorModule.Counters.Add(new PerformanceCounterCollectionRequest(
    @"\.NET CLR Memory([replace-with-application-process-name])\# GC Handles", "GC Handles"));
perfCollectorModule.Initialize(TelemetryConfiguration.Active);

Or you can do the same thing with custom metrics you created:

var perfCollectorModule = new PerformanceCollectorModule();
perfCollectorModule.Counters.Add(new PerformanceCounterCollectionRequest(
    @"\Sales(photo)\# Items Sold", "Photo sales"));
perfCollectorModule.Initialize(TelemetryConfiguration.Active);

Performance counters in Analytics


You can search and display performance counter reports in Analytics.
The performanceCounters schema exposes the category , counter name, and instance name of each
performance counter. In the telemetry for each application, you’ll see only the counters for that application. For
example, to see what counters are available:
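A query along the following lines lists the distinct counters being collected (a sketch; the exact values returned depend on which counters your SDK is configured to gather):

```kusto
performanceCounters
| summarize count() by category, instance, counter
```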

('Instance' here refers to the performance counter instance, not the role or server machine instance. The
performance counter instance name typically segments counters such as processor time by the name of the
process or application.)
To get a chart of available memory over the recent period:
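Assuming the standard Available Bytes memory counter is being collected, a query like this charts it (the counter name and bin size here are illustrative):

```kusto
performanceCounters
| where counter == "Available Bytes"
| summarize avg(value) by bin(timestamp, 10m)
| render timechart
```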
Like other telemetry, performanceCounters also has a column cloud_RoleInstance that indicates the identity of
the host server instance on which your app is running. For example, to compare the performance of your app on
the different machines:
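A sketch of such a comparison, segmenting a counter by cloud_RoleInstance (the counter name is illustrative):

```kusto
performanceCounters
| where counter == "% Processor Time"
| summarize avg(value) by bin(timestamp, 1h), cloud_RoleInstance
| render timechart
```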

[Link] and Application Insights counts


What's the difference between the Exception rate and Exceptions metrics?
Exception rate is a system performance counter. The CLR counts all the handled and unhandled exceptions that
are thrown, and divides the total in a sampling interval by the length of the interval. The Application Insights
SDK collects this result and sends it to the portal.
Exceptions is a count of the TrackException reports received by the portal in the sampling interval of the chart. It
includes only the handled exceptions where you have written TrackException calls in your code, and doesn't
include all unhandled exceptions.

Performance counters in [Link] Core applications


Performance counters are supported only if the application targets the full .NET Framework. Performance
counters cannot be collected for .NET Core applications.

Alerts
Like other metrics, you can set an alert to warn you if a performance counter goes outside a limit you specify.
Open the Alerts blade and click Add Alert.

Next steps
Dependency tracking
Exception tracking
Set up Application Insights: Dependency tracking
11/1/2017 • 5 min to read

A dependency is an external component that is called by your app. It's typically a service called using HTTP, or a
database, or a file system. Application Insights measures how long your application waits for dependencies and
how often a dependency call fails. You can investigate specific calls, and relate them to requests and exceptions.

The out-of-the-box dependency monitor currently reports calls to these types of dependencies:
Server
SQL databases
[Link] web and WCF services that use HTTP-based bindings
Local or remote HTTP calls
Azure Cosmos DB, table, blob storage, and queue
Web pages
AJAX calls
Monitoring works by using byte code instrumentation around selected methods. Performance overhead is
minimal.
You can also write your own SDK calls to monitor other dependencies, both in the client and server code, using
the TrackDependency API.

Set up dependency monitoring


Partial dependency information is collected automatically by the Application Insights SDK. To get complete data,
install the appropriate agent for the host server.

PLATFORM INSTALL

IIS Server Either install Status Monitor on your server, or upgrade your
application to .NET Framework 4.6 or later and install the
Application Insights SDK in your app.

Azure Web App In your web app control panel, open the Application Insights
blade and choose Install if prompted.

Azure Cloud Service Use a startup task, or install .NET Framework 4.6+.

Where to find dependency data


Application Map visualizes dependencies between your app and neighboring components.
Performance, browser, and failure blades show server dependency data.
Browsers blade shows AJAX calls from your users' browsers.
Click through from slow or failed requests to check their dependency calls.
Analytics can be used to query dependency data.

Application Map
Application Map acts as a visual aid to discovering dependencies between the components of your application. It
is automatically generated from the telemetry from your app. This example shows AJAX calls from the browser
scripts and REST calls from the server app to two external services.

Navigate from the boxes to relevant dependency and other charts.


Pin the map to the dashboard, where it will be fully functional.
Learn more.

Performance and failure blades


The performance blade shows the duration of dependency calls made by the server app. There's a summary
chart and a table segmented by call.
Click through the summary charts or the table items to search raw occurrences of these calls.

Failure counts are shown on the Failures blade. A failure is any return code that is not in the range 200-399, or
unknown.
NOTE
100% failures? - This probably indicates that you are only getting partial dependency data. You need to set up
dependency monitoring appropriate to your platform.

AJAX Calls
The Browsers blade shows the duration and failure rate of AJAX calls from JavaScript in your web pages. They
are shown as Dependencies.

Diagnose slow requests


Each request event is associated with the dependency calls, exceptions and other events that are tracked while
your app is processing the request. So if some requests are performing badly, you can find out whether it's due
to slow responses from a dependency.
Let's walk through an example of that.
Tracing from requests to dependencies
Open the Performance blade, and look at the grid of requests:

The top one is taking very long. Let's see if we can find out where the time is spent.
Click that row to see individual request events:
Click any long-running instance to inspect it further, and scroll down to the remote dependency calls related to
this request:

It looks like most of the time servicing this request was spent in a call to a local service.
Select that row to get more information:
Looks like this is where the problem is. Now we just need to find out why that call is taking
so long.
Request timeline
In a different case, there is no dependency call that is particularly long. But by switching to the timeline view, we
can see where the delay occurred in our internal processing:

There seems to be a big gap after the first dependency call, so we should look at our code to see why that is.
Profile your live site
No idea where the time goes? The Application Insights profiler traces HTTP calls to your live site and shows you
which functions in your code took the longest time.

Failed requests
Failed requests might also be associated with failed calls to dependencies. Again, we can click through to track
down the problem.
Click through to an occurrence of a failed request, and look at its associated events.

Analytics
You can track dependencies in the Log Analytics query language. Here are some examples.
Find any failed dependency calls:

dependencies | where success != "True" | take 10

Find AJAX calls:

dependencies | where client_Type == "Browser" | take 10

Find dependency calls associated with requests:


dependencies
| where timestamp > ago(1d) and client_Type != "Browser"
| join (requests | where timestamp > ago(1d))
on operation_Id

Find AJAX calls associated with page views:

dependencies
| where timestamp > ago(1d) and client_Type == "Browser"
| join (browserTimings | where timestamp > ago(1d))
on operation_Id

Custom dependency tracking


The standard dependency-tracking module automatically discovers external dependencies such as databases
and REST APIs. But you might want some additional components to be treated in the same way.
You can write code that sends dependency information, using the same TrackDependency API that is used by the
standard modules.
For example, if you build your code with an assembly that you didn't write yourself, you could time all the calls
to it, to find out what contribution it makes to your response times. To have this data displayed in the
dependency charts in Application Insights, send it using TrackDependency .

var success = false;
var startTime = DateTime.UtcNow;
var timer = System.Diagnostics.Stopwatch.StartNew();
try
{
    // dependency is a placeholder for the component you want to time
    success = dependency.Call();
}
finally
{
    timer.Stop();
    telemetry.TrackDependency("myDependency", "myCall", startTime, timer.Elapsed, success);
}

If you want to switch off the standard dependency tracking module, remove the reference to
DependencyTrackingTelemetryModule in [Link].
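In a standard config file, the entry to remove looks something like this (the type and assembly names shown are those used by current SDK packages; verify against your own file):

```xml
<TelemetryModules>
  <!-- Delete or comment out this element to switch off automatic dependency tracking -->
  <Add Type="Microsoft.ApplicationInsights.DependencyCollector.DependencyTrackingTelemetryModule, Microsoft.AI.DependencyCollector"/>
</TelemetryModules>
```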

Troubleshooting
Dependency success flag always shows either true or false.
SQL query not shown in full.
Upgrade to the latest version of the SDK. If your .NET version is less than 4.6:
IIS host: Install Application Insights Agent on the host servers.
Azure web app: Open Application Insights tab in the web app control panel, and install Application
Insights.

Next steps
Exceptions
User & page data
Availability
Annotations on metric charts in Application Insights
11/1/2017 • 2 min to read

Annotations on Metrics Explorer charts show where you deployed a new build, or other significant event. They
make it easy to see whether your changes had any effect on your application's performance. They can be
automatically created by the Visual Studio Team Services build system. You can also create annotations to flag any
event you like by creating them from PowerShell.

Release annotations with VSTS build


Release annotations are a feature of the cloud-based build and release service of Visual Studio Team Services.
Install the Annotations extension (one time)
To be able to create release annotations, you'll need to install the Release Annotations extension from the
Visual Studio Marketplace.
1. Sign in to your Visual Studio Team Services project.
2. In Visual Studio Marketplace, get the Release Annotations extension, and add it to your Team Services account.
You only need to do this once for your Visual Studio Team Services account. Release annotations can now be
configured for any project in your account.
Configure release annotations
You need to get a separate API key for each VSTS release template.
1. Sign in to the Microsoft Azure Portal and open the Application Insights resource that monitors your application.
(Or create one now, if you haven't done so yet.)
2. Open API Access and copy the Application Insights ID.

3. In a separate browser window, open (or create) the release template that manages your deployments from
Visual Studio Team Services.
Add a task, and select the Application Insights Release Annotation task from the menu.
Paste the Application Id that you copied from the API Access blade.
4. Set the APIKey field to a variable $(ApiKey) .
5. Back in the Azure window, create a new API Key and take a copy of it.

6. Open the Configuration tab of the release template.


Create a variable definition for ApiKey .
Paste your API key to the ApiKey variable definition.
7. Finally, Save the release definition.

View annotations
Now, whenever you use the release template to deploy a new release, an annotation will be sent to Application
Insights. The annotations will appear on charts in Metrics Explorer.
Click on any annotation marker to open details about the release, including requestor, source control branch,
release definition, environment, and more.

Create custom annotations from PowerShell


You can also create annotations from any process you like (without using VS Team System).
1. Make a local copy of the PowerShell script from GitHub.
2. Get the Application ID and create an API key from the API Access blade.
3. Call the script like this:
.\CreateReleaseAnnotation.ps1 `
-applicationId "<applicationId>" `
-apiKey "<apiKey>" `
-releaseName "<myReleaseName>" `
-releaseProperties @{
"ReleaseDescription"="a description";
"TriggerBy"="My Name" }

It's easy to modify the script, for example to create annotations for the past.

Next steps
Create work items
Automation with PowerShell
Configuring the Application Insights SDK with
[Link] or .xml
1/3/2018 • 8 min to read

The Application Insights .NET SDK consists of a number of NuGet packages. The core package provides the API
for sending telemetry to the Application Insights service. Additional packages provide telemetry modules and
initializers for automatically tracking telemetry from your application and its context. By adjusting the
configuration file, you can enable or disable telemetry modules and initializers, and set parameters for some of
them.
The configuration file is named [Link] or [Link] , depending on the type
of your application. It is automatically added to your project when you install most versions of the SDK. It is also
added to a web app by Status Monitor on an IIS server, or when you select the Application Insights extension for
an Azure website or VM.
There isn't an equivalent file to control the SDK in a web page.
This document describes the sections you see in the configuration file, how they control the components of the
SDK, and which NuGet packages load those components.

Telemetry Modules ([Link])


Each telemetry module collects a specific type of data and uses the core API to send the data. The modules are
installed by different NuGet packages, which also add the required lines to the .config file.
There's a node in the configuration file for each module. To disable a module, delete the node or comment it
out.
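For example, a module node can be disabled by wrapping its entry in an XML comment (the module and assembly names here are illustrative of the pattern; use the ones in your own file):

```xml
<TelemetryModules>
  <!--
  <Add Type="Microsoft.ApplicationInsights.WindowsServer.DeveloperModeWithDebuggerAttachedTelemetryModule, Microsoft.AI.WindowsServer"/>
  -->
</TelemetryModules>
```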
Dependency Tracking
Dependency tracking collects telemetry about calls your app makes to databases and external services. To
allow this module to work in an IIS server, you need to install Status Monitor. To use it in Azure
web apps or VMs, select the Application Insights extension.
You can also write your own dependency tracking code using the TrackDependency API.
[Link]
[Link] NuGet package.
Performance collector
Collects system performance counters such as CPU, memory and network load from IIS installations. You can
specify which counters to collect, including performance counters you have set up yourself.
[Link]
[Link] NuGet package.
Application Insights Diagnostics Telemetry
The DiagnosticsTelemetryModule reports errors in the Application Insights instrumentation code itself. For
example, if the code cannot access performance counters or if an ITelemetryInitializer throws an exception.
Trace telemetry tracked by this module appears in the Diagnostic Search. Sends diagnostic data to
[Link].
[Link]
[Link] NuGet package. If you only install this package, the [Link]
file is not automatically created.
Developer Mode
DeveloperModeWithDebuggerAttachedTelemetryModule forces the Application Insights TelemetryChannel to send
data immediately, one telemetry item at a time, when a debugger is attached to the application process. This
reduces the amount of time between the moment when your application tracks telemetry and when it appears
on the Application Insights portal. It causes significant overhead in CPU and network bandwidth.
[Link]
Application Insights Windows Server NuGet package
Web Request Tracking
Reports the response time and result code of HTTP requests.
[Link]
[Link] NuGet package
Exception tracking
ExceptionTrackingTelemetryModule tracks unhandled exceptions in your web app. See Failures and exceptions.
[Link]
[Link] NuGet package
[Link] - tracks unobserved task
exceptions.
[Link] - tracks unhandled
exceptions for worker roles, windows services, and console applications.
Application Insights Windows Server NuGet package.
EventSource Tracking
EventSourceTelemetryModule allows you to configure EventSource events to be sent to Application Insights as
traces. For information on tracking EventSource events, see Using EventSource Events.
[Link]
[Link]
ETW Event Tracking
EtwCollectorTelemetryModule allows you to configure events from ETW providers to be sent to Application
Insights as traces. For information on tracking ETW events, see Using ETW Events.
[Link]
[Link]
[Link]
The [Link] package provides the core API of the SDK. The other telemetry modules use
this, and you can also use it to define your own telemetry.
No entry in [Link].
[Link] NuGet package. If you just install this NuGet, no .config file is generated.

Telemetry Channel
The telemetry channel manages buffering and transmission of telemetry to the Application Insights service.
[Link] is the default channel
for services. It buffers data in memory.
[Link] is an alternative for console applications. It can save any
unflushed data to persistent storage when your app closes down, and will send it when the app starts again.
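When the channel is declared in the config file, it appears as a top-level element along these lines (a sketch, shown for the in-memory channel; the SDK selects a default channel when the element is absent, so check your own file):

```xml
<TelemetryChannel Type="Microsoft.ApplicationInsights.Channel.InMemoryChannel, Microsoft.ApplicationInsights"/>
```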

Telemetry Initializers ([Link])


Telemetry initializers set context properties that are sent along with every item of telemetry.
You can write your own initializers to set context properties.
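A custom initializer is registered in the <TelemetryInitializers> section of the config file, alongside the standard ones (the type and assembly names below are placeholders for your own):

```xml
<TelemetryInitializers>
  <Add Type="MyNamespace.MyTelemetryInitializer, MyAssembly"/>
</TelemetryInitializers>
```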
The standard initializers are all set either by the Web or WindowsServer NuGet packages:
AccountIdTelemetryInitializer sets the AccountId property.
AuthenticatedUserIdTelemetryInitializer sets the AuthenticatedUserId property as set by the JavaScript
SDK.
AzureRoleEnvironmentTelemetryInitializer updates the RoleName and RoleInstance properties of the
Device context for all telemetry items with information extracted from the Azure runtime environment.
BuildInfoConfigComponentVersionTelemetryInitializer updates the Version property of the Component
context for all telemetry items with the value extracted from the [Link] file produced by MS
Build.
ClientIpHeaderTelemetryInitializer updates Ip property of the Location context of all telemetry items
based on the X-Forwarded-For HTTP header of the request.
DeviceTelemetryInitializer updates the following properties of the Device context for all telemetry items.
Type is set to "PC"
Id is set to the domain name of the computer where the web application is running.
OemName is set to the value extracted from the Win32_ComputerSystem.Manufacturer field using WMI.
Model is set to the value extracted from the Win32_ComputerSystem.Model field using WMI.
NetworkType is set to the value extracted from the NetworkInterface .
Language is set to the name of the CurrentCulture .
DomainNameRoleInstanceTelemetryInitializer updates the RoleInstance property of the Device context for
all telemetry items with the domain name of the computer where the web application is running.
OperationNameTelemetryInitializer updates the Name property of the RequestTelemetry and the Name
property of the Operation context of all telemetry items based on the HTTP method, as well as names of
[Link] MVC controller and action invoked to process the request.
OperationIdTelemetryInitializer or OperationCorrelationTelemetryInitializer updates the [Link]
context property of all telemetry items tracked while handling a request with the automatically generated
[Link] .
SessionTelemetryInitializer updates the Id property of the Session context for all telemetry items with
value extracted from the ai_session cookie generated by the ApplicationInsights JavaScript instrumentation
code running in the user's browser.
SyntheticTelemetryInitializer or SyntheticUserAgentTelemetryInitializer updates the User , Session
and Operation context properties of all telemetry items tracked when handling a request from a
synthetic source, such as an availability test or search engine bot. By default, Metrics Explorer does not
display synthetic telemetry.
The <Filters> set identifying properties of the requests.
UserTelemetryInitializer updates the Id and AcquisitionDate properties of User context for all
telemetry items with values extracted from the ai_user cookie generated by the Application Insights
JavaScript instrumentation code running in the user's browser.
WebTestTelemetryInitializer sets the user id, session id and synthetic source properties for HTTP requests
that come from availability tests. The <Filters> set identifying properties of the requests.

For .NET applications running in Service Fabric, you can include the
[Link] NuGet package. This package includes a
FabricTelemetryInitializer , which adds Service Fabric properties to telemetry items. For more information, see
the GitHub page about the properties added by this NuGet package.

Telemetry Processors ([Link])


Telemetry processors can filter and modify each telemetry item just before it is sent from the SDK to the portal.
You can write your own telemetry processors.
Adaptive sampling telemetry processor (from 2.0.0-beta3)
This is enabled by default. If your app sends a lot of telemetry, this processor removes some of it.

<TelemetryProcessors>
<Add
Type="[Link],
[Link]">
<MaxTelemetryItemsPerSecond>5</MaxTelemetryItemsPerSecond>
</Add>
</TelemetryProcessors>

The parameter provides the target that the algorithm tries to achieve. Each instance of the SDK works
independently, so if your server is a cluster of several machines, the actual volume of telemetry will be
multiplied accordingly.
Learn more about sampling.
Fixed-rate sampling telemetry processor (from 2.0.0-beta1)
There is also a standard sampling telemetry processor (from 2.0.1):

<TelemetryProcessors>
<Add Type="[Link],
[Link]">

<!-- Set a percentage close to 100/N where N is an integer. -->


<!-- E.g. 50 (=100/2), 33.33 (=100/3), 25 (=100/4), 20, 1 (=100/100), 0.1 (=100/1000) -->
<SamplingPercentage>10</SamplingPercentage>
</Add>
</TelemetryProcessors>

Channel parameters (Java)


These parameters affect how the Java SDK should store and flush the telemetry data that it collects.
MaxTelemetryBufferCapacity
The number of telemetry items that can be stored in the SDK's in-memory storage. When this number is
reached, the telemetry buffer is flushed - that is, the telemetry items are sent to the Application Insights server.
Min: 1
Max: 1000
Default: 500
<ApplicationInsights>
...
<Channel>
<MaxTelemetryBufferCapacity>100</MaxTelemetryBufferCapacity>
</Channel>
...
</ApplicationInsights>

FlushIntervalInSeconds
Determines how often the data that is stored in the in-memory storage should be flushed (sent to Application
Insights).
Min: 1
Max: 300
Default: 5

<ApplicationInsights>
...
<Channel>
<FlushIntervalInSeconds>100</FlushIntervalInSeconds>
</Channel>
...
</ApplicationInsights>

MaxTransmissionStorageCapacityInMB
Determines the maximum size in MB that is allotted to the persistent storage on the local disk. This storage is
used for persisting telemetry items that failed to be transmitted to the Application Insights endpoint. When
this limit is reached, new telemetry items are discarded.
Min: 1
Max: 100
Default: 10

<ApplicationInsights>
...
<Channel>
<MaxTransmissionStorageCapacityInMB>50</MaxTransmissionStorageCapacityInMB>
</Channel>
...
</ApplicationInsights>

InstrumentationKey
This determines the Application Insights resource in which your data appears. Typically you create a separate
resource, with a separate key, for each of your applications.
If you want to set the key dynamically - for example if you want to send results from your application to
different resources - you can omit the key from the configuration file, and set it in code instead.
To set the key for all instances of TelemetryClient, including standard telemetry modules, set the key in
[Link]. Do this in an initialization method, such as [Link] in an [Link] service:
protected void Application_Start()
{
    Microsoft.ApplicationInsights.Extensibility.
        TelemetryConfiguration.Active.InstrumentationKey =
        // - for example -
        WebConfigurationManager.AppSettings["ikey"];
    //...
}

If you just want to send a specific set of events to a different resource, you can set the key for a specific
TelemetryClient:

var tc = new TelemetryClient();
tc.InstrumentationKey = "----- my key ----";
tc.TrackEvent("myEvent");
// ...

To get a new key, create a new resource in the Application Insights portal.

Next steps
Learn more about the API.
Debug snapshots on exceptions in .NET apps
1/9/2018 • 11 min to read

When an exception occurs, you can automatically collect a debug snapshot from your live web application. The
snapshot shows the state of source code and variables at the moment the exception was thrown. The Snapshot
Debugger (preview) in Azure Application Insights monitors exception telemetry from your web app. It collects
snapshots on your top-throwing exceptions so that you have the information you need to diagnose issues in
production. Include the Snapshot collector NuGet package in your application, and optionally configure collection
parameters in [Link]. Snapshots appear on exceptions in the Application Insights portal.
You can view debug snapshots in the portal to see the call stack and inspect variables at each call stack frame. To
get a more powerful debugging experience with source code, open snapshots with Visual Studio 2017 Enterprise
by downloading the Snapshot Debugger extension for Visual Studio. In Visual Studio you can also set Snappoints
to interactively take snapshots without waiting for an exception.
Snapshot collection is available for:
.NET Framework and [Link] applications running .NET Framework 4.5 or later.
.NET Core 2.0 and [Link] Core 2.0 applications running on Windows.
The following environments are supported:
Azure App Service.
Azure Cloud Service running OS family 4 or later.
Azure Service Fabric services running on Windows Server 2012 R2 or later.
Azure Virtual Machines running Windows Server 2012 R2 or later.
On-premises virtual or physical machines running Windows Server 2012 R2 or later.

NOTE
Client applications (for example, WPF, Windows Forms or UWP) are not supported.

Configure snapshot collection for [Link] applications


1. Enable Application Insights in your web app, if you haven't done it yet.
2. Include the [Link] NuGet package in your app.
3. Review the default options that the package added to [Link]:
<TelemetryProcessors>
<Add Type="[Link],
[Link]">
<!-- The default is true, but you can disable Snapshot Debugging by setting it to false -->
<IsEnabled>true</IsEnabled>
<!-- Snapshot Debugging is usually disabled in developer mode, but you can enable it by setting
this to true. -->
<!-- DeveloperMode is a property on the active TelemetryChannel. -->
<IsEnabledInDeveloperMode>false</IsEnabledInDeveloperMode>
<!-- How many times we need to see an exception before we ask for snapshots. -->
<ThresholdForSnapshotting>5</ThresholdForSnapshotting>
<!-- The maximum number of examples we create for a single problem. -->
<MaximumSnapshotsRequired>3</MaximumSnapshotsRequired>
<!-- The maximum number of problems that we can be tracking at any time. -->
<MaximumCollectionPlanSize>50</MaximumCollectionPlanSize>
<!-- How often to reset problem counters. -->
<ProblemCounterResetInterval>[Link]</ProblemCounterResetInterval>
<!-- The maximum number of snapshots allowed per day. -->
<SnapshotsPerDayLimit>50</SnapshotsPerDayLimit>
</Add>
</TelemetryProcessors>

4. Snapshots are collected only on exceptions that are reported to Application Insights. In some cases (for
example, older versions of the .NET platform), you might need to configure exception collection to see
exceptions with snapshots in the portal.
Configure snapshot collection for [Link] Core 2.0 applications
1. Enable Application Insights in your [Link] Core web app, if you haven't done it yet.

NOTE
Be sure that your application references version 2.1.1, or newer, of the [Link]
package.

2. Include the [Link] NuGet package in your app.


3. Modify your application's Startup class to add and configure the Snapshot Collector's telemetry processor.
using Microsoft.ApplicationInsights.SnapshotCollector;
using Microsoft.Extensions.Options;
...
class Startup
{
    private class SnapshotCollectorTelemetryProcessorFactory : ITelemetryProcessorFactory
    {
        private readonly IServiceProvider _serviceProvider;

        public SnapshotCollectorTelemetryProcessorFactory(IServiceProvider serviceProvider) =>
            _serviceProvider = serviceProvider;

        public ITelemetryProcessor Create(ITelemetryProcessor next)
        {
            var snapshotConfigurationOptions =
                _serviceProvider.GetService<IOptions<SnapshotCollectorConfiguration>>();
            return new SnapshotCollectorTelemetryProcessor(next, configuration:
                snapshotConfigurationOptions.Value);
        }
    }

    public Startup(IConfiguration configuration) => Configuration = configuration;

    public IConfiguration Configuration { get; }

    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
        // Configure SnapshotCollector from application settings
        services.Configure<SnapshotCollectorConfiguration>(
            Configuration.GetSection(nameof(SnapshotCollectorConfiguration)));

        // Add SnapshotCollector telemetry processor.
        services.AddSingleton<ITelemetryProcessorFactory>(sp =>
            new SnapshotCollectorTelemetryProcessorFactory(sp));

        // TODO: Add other services your application needs here.
    }
}

4. Configure the Snapshot Collector by adding a SnapshotCollectorConfiguration section to [Link].


For example:

{
"ApplicationInsights": {
"InstrumentationKey": "<your instrumentation key>"
},
"SnapshotCollectorConfiguration": {
"IsEnabledInDeveloperMode": true
}
}

Configure snapshot collection for other .NET applications


1. If your application is not already instrumented with Application Insights, get started by enabling Application
Insights and setting the instrumentation key.
2. Add the [Link] NuGet package in your app.
3. Snapshots are collected only on exceptions that are reported to Application Insights. You may need to
modify your code to report them. The exception handling code depends on the structure of your
application, but an example is below:
TelemetryClient _telemetryClient = new TelemetryClient();

void ExampleRequest()
{
try
{
// TODO: Handle the request.
}
catch (Exception ex)
{
// Report the exception to Application Insights.
_telemetryClient.TrackException(ex);

// TODO: Rethrow the exception if desired.


}
}

Grant permissions
Owners of the Azure subscription can inspect snapshots. Other users must be granted permission by an owner.
To grant permission, assign the Application Insights Snapshot Debugger role to users who will inspect snapshots.
This role can be assigned to individual users or groups by subscription owners for the target Application Insights
resource or its resource group or subscription.
1. Open the Access Control (IAM) blade.
2. Click the +Add button.
3. Select Application Insights Snapshot Debugger from the Roles drop-down list.
4. Search for and enter a name for the user to add.
5. Click the Save button to add the user to the role.

IMPORTANT
Snapshots can potentially contain personal and other sensitive information in variable and parameter values.

Debug snapshots in the Application Insights portal


If a snapshot is available for a given exception or a problem ID, an Open Debug Snapshot button appears on the
exception in the Application Insights portal.
In the Debug Snapshot view, you see a call stack and a variables pane. When you select frames of the call stack in
the call stack pane, you can view local variables and parameters for that function call in the variables pane.

Snapshots might contain sensitive information, and by default they are not viewable. To view snapshots, you must
have the Application Insights Snapshot Debugger role assigned to you.

Debug snapshots with Visual Studio 2017 Enterprise


1. Click the Download Snapshot button to download a .diagsession file, which can be opened by Visual
Studio 2017 Enterprise.
2. To open the .diagsession file, you must first download and install the Snapshot Debugger extension for
Visual Studio.
3. After you open the snapshot file, the Minidump Debugging page in Visual Studio appears. Click Debug
Managed Code to start debugging the snapshot. The snapshot opens to the line of code where the
exception was thrown so that you can debug the current state of the process.

The downloaded snapshot contains any symbol files that were found on your web application server. These
symbol files are required to associate snapshot data with source code. For App Service apps, make sure to enable
symbol deployment when you publish your web apps.

How snapshots work


When your application starts, a separate snapshot uploader process is created that monitors your application for
snapshot requests. When a snapshot is requested, a shadow copy of the running process is made in about 10 to
20 milliseconds. The shadow process is then analyzed, and a snapshot is created while the main process continues
to run and serve traffic to users. The snapshot is then uploaded to Application Insights along with any relevant
symbol (.pdb) files that are needed to view the snapshot.

Current limitations
Publish symbols
The Snapshot Debugger requires symbol files on the production server to decode variables and to provide a
debugging experience in Visual Studio. The 15.2 release of Visual Studio 2017 publishes symbols for release
builds by default when it publishes to App Service. In prior versions, you need to add the following line to your
publish profile .pubxml file so that symbols are published in release mode:

<ExcludeGeneratedDebugSymbol>False</ExcludeGeneratedDebugSymbol>
For Azure Compute and other types, ensure that the symbol files are in the same folder as the main application .dll
(typically wwwroot/bin ) or are available on the current path.
Optimized builds
In some cases, local variables cannot be viewed in release builds because of optimizations that are applied during
the build process.

Troubleshooting
These tips help you troubleshoot problems with the Snapshot Debugger.
Verify the instrumentation key
Make sure that you're using the correct instrumentation key in your published application. Usually, Application
Insights reads the instrumentation key from the [Link] file. Verify that the value is the same as
the instrumentation key for the Application Insights resource that you see in the portal.
Check the uploader logs
After a snapshot is created, a minidump file (.dmp) is created on disk. A separate uploader process takes that
minidump file and uploads it, along with any associated PDBs, to Application Insights Snapshot Debugger storage.
After the minidump has uploaded successfully, it is deleted from disk. The log files for the minidump uploader are
retained on disk. In an App Service environment, you can find these logs in D:\Home\LogFiles\Uploader_*.log . Use
the Kudu management site for App Service to find these log files.
1. Open your App Service application in the Azure portal.
2. Select the Advanced Tools blade, or search for Kudu.
3. Click Go.
4. In the Debug console drop-down list box, select CMD.
5. Click LogFiles.
You should see at least one file with a name that begins with Uploader_ and a .log extension. Click the
appropriate icon to download any log files or open them in a browser. The file name includes the machine name. If
your App Service instance is hosted on more than one machine, there are separate log files for each machine.
When the uploader detects a new minidump file, it is recorded in the log file. Here's an example of a successful
upload:

[Link] Information: 0 : Dump available [Link]


DateTime=2017-05-25T[Link].0349846Z
[Link] Information: 0 : Uploading
D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\[Link], 329.12 MB
DateTime=2017-05-25T[Link].0145444Z
[Link] Information: 0 : Upload successful.
DateTime=2017-05-25T[Link].9164120Z
[Link] Information: 0 : Extracting PDB info from
D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\[Link].
DateTime=2017-05-25T[Link].9164120Z
[Link] Information: 0 : Matched 2 PDB(s) with local files.
DateTime=2017-05-25T[Link].2310982Z
[Link] Information: 0 : Stamp does not want any of our matched PDBs.
DateTime=2017-05-25T[Link].5435948Z
[Link] Information: 0 : Deleted
D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\[Link]
DateTime=2017-05-25T[Link].6095821Z

In the previous example, the instrumentation key is c12a605e73c44346a984e00000000000 . This value should match
the instrumentation key for your application. The minidump is associated with a snapshot with the ID
139e411a23934dc0b9ea08a626db16c5 . You can use this ID later to locate the associated exception telemetry in
Application Insights Analytics.
The uploader scans for new PDBs about once every 15 minutes. Here's an example:

[Link] Information: 0 : PDB rescan requested.


DateTime=2017-05-25T[Link].8003886Z
[Link] Information: 0 : Scanning D:\home\site\wwwroot\ for local PDBs.
DateTime=2017-05-25T[Link].8003886Z
[Link] Information: 0 : Scanning D:\local\Temporary [Link]
Files\root\a6554c94\e3ad6f22\assembly\dl3\81d5008b\00b93cc8_dec5d201 for local PDBs.
DateTime=2017-05-25T[Link].8160276Z
[Link] Information: 0 : Local PDB scan complete. Found 2 PDB(s).
DateTime=2017-05-25T[Link].8316450Z
[Link] Information: 0 : Deleted PDB scan marker
D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\.pdbscan.
DateTime=2017-05-25T[Link].8316450Z

For applications that are not hosted in App Service, the uploader logs are in the same folder as the minidumps:
%TEMP%\Dumps\<ikey> (where <ikey> is your instrumentation key).

Troubleshooting Cloud Services


For roles in Cloud Services, the default temporary folder may be too small to hold the minidump files, leading to
lost snapshots. The space needed depends on the total working set of your application and the number of
concurrent snapshots. The working set of a 32-bit [Link] web role is typically between 200 MB and 500 MB. You
should allow for at least two concurrent snapshots. For example, if your application uses 1 GB of total working set,
you should ensure that there is at least 2 GB of disk space to store snapshots. Follow these steps to configure your
Cloud Service role with a dedicated local resource for snapshots.
1. Add a new local resource to your Cloud Service by editing the Cloud Service definition (.csdef) file. The
following example defines a resource called SnapshotStore with a size of 5 GB.

<LocalResources>
<LocalStorage name="SnapshotStore" cleanOnRoleRecycle="false" sizeInMB="5120" />
</LocalResources>

2. Modify your role's OnStart method to add an environment variable that points to the SnapshotStore local
resource.

public override bool OnStart()
{
    [Link]("SNAPSHOTSTORE",
        [Link]("SnapshotStore").RootPath);
    return [Link]();
}

3. Update your role's [Link] file to override the temporary folder location used by
SnapshotCollector

<TelemetryProcessors>
<Add Type="[Link],
[Link]">
<!-- Use the SnapshotStore local resource for snapshots -->
<TempFolder>%SNAPSHOTSTORE%</TempFolder>
<!-- Other SnapshotCollector configuration options -->
</Add>
</TelemetryProcessors>
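As a rough check of the sizing guidance above (total working set multiplied by the number of concurrent snapshots, with a minimum of two), here is a small illustrative sketch. The method name is hypothetical and not part of any SDK:

```java
public class SnapshotStoreSizing {
    // Minimum disk space (MB) to reserve for snapshots:
    // total working set times the number of concurrent snapshots (at least 2).
    static int minimumDiskMB(int workingSetMB, int concurrentSnapshots) {
        return workingSetMB * Math.max(concurrentSnapshots, 2);
    }

    public static void main(String[] args) {
        // A 1 GB working set needs at least 2 GB of snapshot storage.
        System.out.println(minimumDiskMB(1024, 2)); // 2048
    }
}
```

Compare the result against the sizeInMB value you configured for the SnapshotStore local resource.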

Use Application Insights search to find exceptions with snapshots
When a snapshot is created, the throwing exception is tagged with a snapshot ID. When the exception telemetry is
reported to Application Insights, that snapshot ID is included as a custom property. Using the Search blade in
Application Insights, you can find all telemetry with the [Link] custom property.
1. Browse to your Application Insights resource in the Azure portal.
2. Click Search.
3. Type [Link] in the Search text box and press Enter.

If this search returns no results, then no snapshots were reported to Application Insights for your application in the
selected time range.
To search for a specific snapshot ID from the Uploader logs, type that ID in the Search box. If you can't find
telemetry for a snapshot that you know was uploaded, follow these steps:
1. Double-check that you're looking at the right Application Insights resource by verifying the instrumentation
key.
2. Using the timestamp from the Uploader log, adjust the Time Range filter of the search to cover that time
range.
If you still don't see an exception with that snapshot ID, then the exception telemetry wasn't reported to
Application Insights. This situation can happen if your application crashed after it took the snapshot but before it
reported the exception telemetry. In this case, check the App Service logs under Diagnose and solve problems to
see if there were unexpected restarts or unhandled exceptions.
Next steps
Set snappoints in your code to get snapshots without waiting for an exception.
Diagnose exceptions in your web apps explains how to make more exceptions visible to Application Insights.
Smart Detection automatically discovers performance anomalies.
Explore Java trace logs in Application Insights
11/1/2017 • 1 min to read

If you're using Logback or Log4J (v1.2 or v2.0) for tracing, you can have your trace logs sent automatically to
Application Insights where you can explore and search on them.

Install the Java SDK


Install Application Insights SDK for Java, if you haven't already done that.
(If you don't want to track HTTP requests, you can omit most of the .xml configuration file, but you must at least
include the InstrumentationKey element. You should also call new TelemetryClient() to initialize the SDK.)

Add logging libraries to your project


Choose the appropriate way for your project.
If you're using Maven...
If your project is already set up to use Maven for build, merge one of the following snippets of code into your
[Link] file.
Then refresh the project dependencies to download the binaries.
Logback

<dependencies>
<dependency>
<groupId>[Link]</groupId>
<artifactId>applicationinsights-logging-logback</artifactId>
<version>[1.0,)</version>
</dependency>
</dependencies>

Log4J v2.0

<dependencies>
<dependency>
<groupId>[Link]</groupId>
<artifactId>applicationinsights-logging-log4j2</artifactId>
<version>[1.0,)</version>
</dependency>
</dependencies>

Log4J v1.2
<dependencies>
<dependency>
<groupId>[Link]</groupId>
<artifactId>applicationinsights-logging-log4j1_2</artifactId>
<version>[1.0,)</version>
</dependency>
</dependencies>

If you're using Gradle...


If your project is already set up to use Gradle for build, add one of the following lines to the dependencies group
in your [Link] file, and then refresh the project dependencies to download the binaries.
Logback

compile group: '[Link]', name: 'applicationinsights-logging-logback', version: '1.0.+'

Log4J v2.0

compile group: '[Link]', name: 'applicationinsights-logging-log4j2', version: '1.0.+'

Log4J v1.2

compile group: '[Link]', name: 'applicationinsights-logging-log4j1_2', version: '1.0.+'

Otherwise ...
Download and extract the appropriate appender, then add the appropriate library to your project:

LOGGER       DOWNLOAD                      LIBRARY
Logback      SDK with Logback appender     applicationinsights-logging-logback
Log4J v2.0   SDK with Log4J v2 appender    applicationinsights-logging-log4j2
Log4J v1.2   SDK with Log4J v1.2 appender  applicationinsights-logging-log4j1_2

Add the appender to your logging framework


To start getting traces, merge the relevant snippet of code to the Log4J or Logback configuration file:
Logback

<appender name="aiAppender"
class="[Link]">
</appender>
<root level="trace">
<appender-ref ref="aiAppender" />
</root>

Log4J v2.0
<Configuration packages="[Link].Log4j">
<Appenders>
<ApplicationInsightsAppender name="aiAppender" />
</Appenders>
<Loggers>
<Root level="trace">
<AppenderRef ref="aiAppender"/>
</Root>
</Loggers>
</Configuration>

Log4J v1.2

<appender name="aiAppender"
class="[Link].log4j.v1_2.ApplicationInsightsAppender">
</appender>
<root>
<priority value ="trace" />
<appender-ref ref="aiAppender" />
</root>

The Application Insights appenders can be referenced by any configured logger, and not necessarily by the root
logger (as shown in the code samples above).

Explore your traces in the Application Insights portal


Now that you've configured your project to send traces to Application Insights, you can view and search these
traces in the Application Insights portal, in the Search blade.

Next steps
Diagnostic search
collectd: Linux performance metrics in Application
Insights
11/1/2017 • 2 min to read

To explore Linux system performance metrics in Application Insights, install collectd, together with its Application
Insights plug-in. This open-source solution gathers various system and network statistics.
Typically you'll use collectd if you have already instrumented your Java web service with Application Insights. It
gives you more data to help you to enhance your app's performance or diagnose problems.

Get your instrumentation key


In the Microsoft Azure portal, open the Application Insights resource where you want the data to appear. (Or create
a new resource.)
Take a copy of the instrumentation key, which identifies the resource.
Install collectd and the plug-in
On your Linux server machines:
1. Install collectd version 5.4.0 or later.
2. Download the Application Insights collectd writer plugin. Note the version number.
3. Copy the plugin JAR into /usr/share/collectd/java .
4. Edit /etc/collectd/[Link] :
Ensure that the Java plugin is enabled.
Update the JVMArg for the [Link] to include the following JAR. Update the version number to
match the one you downloaded:
/usr/share/collectd/java/[Link]
Add this snippet, using the Instrumentation Key from your resource:

LoadPlugin "[Link]"
<Plugin ApplicationInsightsWriter>
InstrumentationKey "Your key"
</Plugin>

Here's part of a sample configuration file:

...
# collectd plugins
LoadPlugin cpu
LoadPlugin disk
LoadPlugin load
...

# Enable the Java plugin
LoadPlugin "java"

# Configure the Java plugin
<Plugin "java">
  JVMArg "-verbose:jni"
  JVMArg "-[Link]=/usr/share/collectd/java/applicationinsights-collectd-[Link]:/usr/share/collectd/java/[Link]"

  # Enable the Application Insights plugin
  LoadPlugin "[Link]"

  # Configure the Application Insights plugin
  <Plugin ApplicationInsightsWriter>
    InstrumentationKey "12345678-1234-1234-1234-123456781234"
  </Plugin>

  # Other plugin configurations ...
  ...
</Plugin>
...

Configure other collectd plugins, which can collect various data from different sources.
Restart collectd according to its manual.

View the data in Application Insights


In your Application Insights resource, open Metrics Explorer and add charts, selecting the metrics you want to see
from the Custom category.

By default, the metrics are aggregated across all host machines from which the metrics were collected. To view the
metrics per host, in the Chart details blade, turn on Grouping and then choose to group by CollectD-Host.

To exclude upload of specific statistics


By default, the Application Insights plugin sends all the data collected by all the enabled collectd 'read' plugins.
To exclude data from specific plugins or data sources:
Edit the configuration file.
In <Plugin ApplicationInsightsWriter> , add directive lines like this:

DIRECTIVE                  EFFECT
Exclude disk               Exclude all data collected by the disk plugin.
Exclude disk:read,write    Exclude the sources named read and write from the disk plugin.

Separate directives with a newline.
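The two directive forms can be modeled as follows: a bare plugin name excludes everything from that plugin, while plugin:source1,source2 excludes only the named data sources. This is an illustrative sketch of the matching rule, not the plugin's actual implementation:

```java
public class ExcludeDirectiveSketch {
    // directive: "disk" or "disk:read,write".
    // Returns true when the (plugin, source) pair is excluded by the directive.
    static boolean excludes(String directive, String plugin, String source) {
        String[] parts = directive.split(":", 2);
        if (!parts[0].trim().equals(plugin)) {
            return false;               // directive targets a different plugin
        }
        if (parts.length == 1) {
            return true;                // bare plugin name: exclude everything
        }
        for (String s : parts[1].split(",")) {
            if (s.trim().equals(source)) {
                return true;            // named source is excluded
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(excludes("disk", "disk", "read"));            // true
        System.out.println(excludes("disk:read,write", "disk", "read")); // true
        System.out.println(excludes("disk:read,write", "disk", "ops"));  // false
    }
}
```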

Problems?
I don't see data in the portal
Open Search to see if the raw events have arrived. Sometimes they take longer to appear in metrics explorer.
You might need to set firewall exceptions for outgoing data
Enable tracing in the Application Insights plugin. Add this line within <Plugin ApplicationInsightsWriter> :
SDKLogger true
Open a terminal and start collectd in verbose mode, to see any issues it is reporting:
sudo collectd -f

Known issue
The Application Insights Write plugin is incompatible with certain Read plugins. Some plugins sometimes send
"NaN" where the Application Insights plugin expects a floating-point number.
Symptom: The collectd log shows errors that include "AI: ... SyntaxError: Unexpected token N".
Workaround: Exclude data collected by the problematic Read plugins.
Monitor dependencies, exceptions and execution
times in Java web apps
11/1/2017 • 2 min to read

If you have instrumented your Java web app with Application Insights, you can use the Java Agent to get deeper
insights, without any code changes:
Dependencies: Data about calls that your application makes to other components, including:
REST calls made via HttpClient, OkHttp, and RestTemplate (Spring).
Redis calls made via the Jedis client. If the call takes longer than 10s, the agent also fetches the call
arguments.
JDBC calls - MySQL, SQL Server, PostgreSQL, SQLite, Oracle DB or Apache Derby DB. "executeBatch"
calls are supported. For MySQL and PostgreSQL, if the call takes longer than 10s, the agent reports the
query plan.
Caught exceptions: Data about exceptions that are handled by your code.
Method execution time: Data about the time it takes to execute specific methods.
To use the Java agent, you install it on your server. Your web apps must be instrumented with the Application
Insights Java SDK.

Install the Application Insights agent for Java


1. On the machine running your Java server, download the agent.
2. Edit the application server startup script, and add the following JVM argument:
-javaagent:<full path to the agent JAR file>
For example, in Tomcat on a Linux machine:
export JAVA_OPTS="$JAVA_OPTS -javaagent:<full path to agent JAR file>"

3. Restart your application server.

Configure the agent


Create a file named [Link] and place it in the same folder as the agent JAR file.
Set the content of the xml file. Edit the following example to include or omit the features you want.
<?xml version="1.0" encoding="utf-8"?>
<ApplicationInsightsAgent>
  <Instrumentation>

    <!-- Collect remote dependency data -->
    <BuiltIn enabled="true">
      <!-- Disable Redis, or alter the call duration threshold above which arguments are sent.
           Defaults: enabled, 10000 ms -->
      <Jedis enabled="true" thresholdInMS="1000"/>

      <!-- Set the SQL query duration above which the query plan is reported (MySQL, PostgreSQL).
           Default is 10000 ms. -->
      <MaxStatementQueryLimitInMS>1000</MaxStatementQueryLimitInMS>
    </BuiltIn>

    <!-- Collect data about caught exceptions and method execution times -->
    <Class name="[Link]">
      <Method name="methodOne"
              reportCaughtExceptions="true"
              reportExecutionTime="true"
              />

      <!-- Report on the particular signature
           void methodTwo(String, int) -->
      <Method name="methodTwo"
              reportExecutionTime="true"
              signature="(Ljava/lang/String;I)V" />
    </Class>

  </Instrumentation>
</ApplicationInsightsAgent>

You have to enable exception reporting and method timing for individual methods.
By default, reportExecutionTime is true and reportCaughtExceptions is false.

View the data


In the Application Insights resource, aggregated remote dependency and method execution times appear under
the Performance tile.
To search for individual instances of dependency, exception, and method reports, open Search.
Diagnosing dependency issues - learn more.

Questions? Problems?
No data? Set firewall exceptions
Troubleshooting Java
Filter telemetry in your Java web app
11/1/2017 • 3 min to read

Filters provide a way to select the telemetry that your Java web app sends to Application Insights. There are some
out-of-the-box filters that you can use, and you can also write your own custom filters.
The out-of-the-box filters include:
Trace severity level
Specific URLs, keywords or response codes
Fast responses - that is, requests to which your app responded quickly
Specific event names

NOTE
Filters skew the metrics of your app. For example, you might decide that, in order to diagnose slow responses, you will set a
filter to discard fast response times. But you must be aware that the average response times reported by Application Insights
will then be slower than the true speed, and the count of requests will be smaller than the real count. If this is a concern, use
Sampling instead.

Setting filters
In [Link], add a TelemetryProcessors section like this example:
<ApplicationInsights>
<TelemetryProcessors>

<BuiltInProcessors>
<Processor type="TraceTelemetryFilter">
<Add name="FromSeverityLevel" value="ERROR"/>
</Processor>

<Processor type="RequestTelemetryFilter">
<Add name="MinimumDurationInMS" value="100"/>
<Add name="NotNeededResponseCodes" value="200-400"/>
</Processor>

<Processor type="PageViewTelemetryFilter">
<Add name="DurationThresholdInMS" value="100"/>
<Add name="NotNeededNames" value="home,index"/>
<Add name="NotNeededUrls" value=".jpg,.css"/>
</Processor>

<Processor type="TelemetryEventFilter">
<!-- Names of events we don't want to see -->
<Add name="NotNeededNames" value="Start,Stop,Pause"/>
</Processor>

<!-- Exclude telemetry from availability tests and bots -->
<Processor type="SyntheticSourceFilter">
    <!-- Optional: specify which synthetic sources, comma-separated.
         Default is all synthetics. -->
    <Add name="NotNeededSources" value="Application Insights Availability Monitoring,BingPreview"/>
</Processor>

</BuiltInProcessors>

<CustomProcessors>
<Processor type="[Link]">
<Add name="Successful" value="false"/>
</Processor>
</CustomProcessors>

</TelemetryProcessors>
</ApplicationInsights>

Inspect the full set of built-in processors.

Built-in filters
Metric Telemetry filter

<Processor type="MetricTelemetryFilter">
<Add name="NotNeeded" value="metric1,metric2"/>
</Processor>

NotNeeded - Comma-separated list of custom metric names.


Page View Telemetry filter
<Processor type="PageViewTelemetryFilter">
<Add name="DurationThresholdInMS" value="500"/>
<Add name="NotNeededNames" value="page1,page2"/>
<Add name="NotNeededUrls" value="url1,url2"/>
</Processor>

DurationThresholdInMS - Duration refers to the time taken to load the page. If this is set, pages that loaded faster
than this time are not reported.
NotNeededNames - Comma-separated list of page names.
NotNeededUrls - Comma-separated list of URL fragments. For example, "home" filters out all pages that have
"home" in the URL.
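The "home" example above relies on substring matching: a page view is dropped when any configured fragment appears anywhere in its URL. A small illustrative sketch of that rule (not the SDK's internals; the URL is a made-up example):

```java
public class UrlFilterSketch {
    // Returns true when the page view should be dropped: any configured
    // fragment that appears anywhere in the URL excludes it.
    static boolean isFilteredOut(String url, String notNeededUrls) {
        for (String fragment : notNeededUrls.split(",")) {
            if (url.contains(fragment.trim())) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isFilteredOut("https://contoso.com/home/index", "home")); // true
        System.out.println(isFilteredOut("https://contoso.com/cart", ".jpg,.css"));  // false
    }
}
```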
Request Telemetry filter

<Processor type="RequestTelemetryFilter">
    <Add name="MinimumDurationInMS" value="500"/>
    <Add name="NotNeededResponseCodes" value="200-400"/>
    <Add name="NotNeededUrls" value="url1,url2"/>
</Processor>

MinimumDurationInMS - Requests with a shorter duration than this are not reported.
NotNeededResponseCodes - Comma-separated list of response codes or code ranges to exclude.
NotNeededUrls - Comma-separated list of URL fragments.

Synthetic Source filter


Filters out all telemetry that has a value in the SyntheticSource property. This includes requests from bots, spiders,
and availability tests.
Filter out telemetry for all synthetic requests:

<Processor type="SyntheticSourceFilter" />

Filter out telemetry for specific synthetic sources:

<Processor type="SyntheticSourceFilter">
    <Add name="NotNeeded" value="source1,source2"/>
</Processor>

NotNeeded - Comma-separated list of synthetic source names.


Telemetry Event filter
Filters custom events (logged using TrackEvent()).

<Processor type="TelemetryEventFilter">
    <Add name="NotNeededNames" value="event1,event2"/>
</Processor>

NotNeededNames - Comma-separated list of event names.


Trace Telemetry filter
Filters log traces (logged using TrackTrace() or a logging framework collector).
<Processor type="TraceTelemetryFilter">
<Add name="FromSeverityLevel" value="ERROR"/>
</Processor>

FromSeverityLevel valid values are:

OFF - Filter out ALL traces.
TRACE - No filtering; equivalent to the TRACE level.
INFO - Filter out TRACE.
WARN - Filter out TRACE and INFO.
ERROR - Filter out TRACE, INFO, and WARN.
CRITICAL - Filter out everything except CRITICAL.
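These values form an ordering: a trace passes the filter when its level is at or above FromSeverityLevel. A minimal sketch of that rule, with an illustrative ordering (not the SDK's actual implementation):

```java
import java.util.Arrays;
import java.util.List;

public class SeverityFilterSketch {
    // Levels in ascending order of severity; OFF sits above all of them,
    // so nothing passes when it is the threshold.
    static final List<String> LEVELS =
            Arrays.asList("TRACE", "INFO", "WARN", "ERROR", "CRITICAL", "OFF");

    // A trace passes when its level is at or above the configured threshold.
    static boolean passes(String traceLevel, String fromSeverityLevel) {
        return LEVELS.indexOf(traceLevel) >= LEVELS.indexOf(fromSeverityLevel);
    }

    public static void main(String[] args) {
        System.out.println(passes("WARN", "ERROR"));  // false: filtered out
        System.out.println(passes("ERROR", "ERROR")); // true: kept
        System.out.println(passes("INFO", "TRACE"));  // true: TRACE means no filtering
    }
}
```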

Custom filters
1. Code your filter
In your code, create a class that implements TelemetryProcessor :

package [Link];
import [Link];
import [Link];

public class SuccessFilter implements TelemetryProcessor {

    /* Any parameters that are required to support the filter. */
    private String successful;

    /* Initializers for the parameters, named "set<ParameterName>". */
    public void setSuccessful(String successful)
    {
        [Link] = successful;
    }

    /* This method is called for each item of telemetry to be sent.
       Return false to discard it.
       Return true to allow other processors to inspect it. */
    @Override
    public boolean process(Telemetry telemetry) {
        if (telemetry == null) { return true; }
        if (telemetry instanceof RequestTelemetry)
        {
            RequestTelemetry requestTelemetry = (RequestTelemetry) telemetry;
            // Compare as strings so the "Successful" parameter from the
            // configuration file ("true"/"false") is matched correctly.
            return String.valueOf([Link]()).equals(successful);
        }
        return true;
    }
}

2. Invoke your filter in the configuration file


In [Link]:
<ApplicationInsights>
<TelemetryProcessors>
<CustomProcessors>
<Processor type="[Link]">
<Add name="Successful" value="false"/>
</Processor>
</CustomProcessors>
</TelemetryProcessors>
</ApplicationInsights>

Troubleshooting
My filter isn't working.
Check that you have provided valid parameter values. For example, durations should be integers. Invalid values
will cause the filter to be ignored. If your custom filter throws an exception from a constructor or set method, it
will be ignored.

Next steps
Sampling - Consider sampling as an alternative that does not skew your metrics.
Monitor availability and responsiveness of any
web site
12/15/2017 • 13 min to read

After you've deployed your web app or web site to any server, you can set up tests to monitor its
availability and responsiveness. Azure Application Insights sends web requests to your application at
regular intervals from points around the world. It alerts you if your application doesn't respond, or
responds slowly.
You can set up availability tests for any HTTP or HTTPS endpoint that is accessible from the public
internet. You don't have to add anything to the web site you're testing. It doesn't even have to be your
site: you could test a REST API service on which you depend.
There are two types of availability tests:
URL ping test: a simple test that you can create in the Azure portal.
Multi-step web test: a test that you create in Visual Studio Enterprise and upload to the portal.
You can create up to 100 availability tests per application resource.

Open a resource for your availability test reports


If you have already configured Application Insights for your web app, open its Application Insights
resource in the Azure portal.
Or, if you want to see your reports in a new resource, sign up to Microsoft Azure, go to the Azure
portal, and create an Application Insights resource.

Click All resources to open the Overview blade for the new resource.

Create a URL ping test


Open the Availability blade and add a test.
The URL can be any web page you want to test, but it must be visible from the public internet. The
URL can include a query string. So, for example, you can exercise your database a little. If the URL
resolves to a redirect, we follow it up to 10 redirects.
Parse dependent requests: If this option is checked, the test requests images, scripts, style files,
and other files that are part of the web page under test. The recorded response time includes the
time taken to get these files. The test fails if not all of these resources can be successfully downloaded
within the timeout for the whole test.
If the option is not checked, the test only requests the file at the URL you specified.
Enable retries: If this option is checked, when the test fails, it is retried after a short interval. A failure
is reported only if three successive attempts fail. Subsequent tests are then performed at the usual test
frequency. Retry is temporarily suspended until the next success. This rule is applied independently at
each test location. We recommend this option. On average, about 80% of failures disappear on retry.
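The retry rule above, which reports a failure only when three successive attempts fail, can be sketched as follows (illustrative only, not the service's implementation):

```java
public class RetryRuleSketch {
    // results[i] is true when attempt i succeeded.
    // A failure is reported only when three attempts in a row fail.
    static boolean reportFailure(boolean[] results) {
        int consecutiveFailures = 0;
        for (boolean success : results) {
            consecutiveFailures = success ? 0 : consecutiveFailures + 1;
            if (consecutiveFailures >= 3) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Two failures followed by a success: no alert.
        System.out.println(reportFailure(new boolean[]{false, false, true}));        // false
        // Three failures in a row: alert.
        System.out.println(reportFailure(new boolean[]{true, false, false, false})); // true
    }
}
```

This is applied independently at each test location, which is why a transient blip at one location rarely raises an alert.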
Test frequency: Sets how often the test is run from each test location. With a frequency of five
minutes and five test locations, your site is tested on average every minute.
Test locations are the places from where our servers send web requests to your URL. Choose more
than one so that you can distinguish problems in your website from network issues. You can select up
to 16 locations.
Success criteria:
Test timeout: Decrease this value to be alerted about slow responses. The test is counted as a
failure if the responses from your site have not been received within this period. If you selected
Parse dependent requests, then all the images, style files, scripts, and other dependent resources
must have been received within this period.
HTTP response: The returned status code that is counted as a success. 200 is the code that
indicates that a normal web page has been returned.
Content match: a string, like "Welcome!" We test that an exact case-sensitive match occurs in
every response. It must be a plain string, without wildcards. Don't forget that if your page content
changes you might have to update it.
Alerts are, by default, sent to you if there are failures in three locations over five minutes. A failure
in one location is likely to be a network problem, and not a problem with your site. But you can
change the threshold to be more or less sensitive, and you can also change who the emails should
be sent to.
You can set up a webhook that is called when an alert is raised. (But note that, at present, query
parameters are not passed through as Properties.)
Test more URLs
Add more tests. For example, in addition to testing your home page, you can make sure your database is
running by testing the URL for a search.

See your availability test results


After a few minutes, click Refresh to see test results.

The scatterplot shows samples of the test results that have diagnostic test-step detail in them. The test
engine stores diagnostic detail for tests that have failures. For successful tests, diagnostic details are
stored for a subset of the executions. Hover over any of the green/red dots to see the test timestamp, test
duration, location, and test name. Click through any dot in the scatter plot to see the details of the test
result.
Select a particular test or location, or reduce the time period, to see more results around the time of
interest. Use Search Explorer to see results from all executions, or use Analytics queries to run custom
reports on this data.
In addition to the raw results, there are two Availability metrics in Metrics Explorer:
1. Availability: Percentage of the tests that were successful, across all test executions.
2. Test Duration: Average test duration across all test executions.
You can apply filters on the test name and location to analyze trends for a particular test or location.
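The Availability metric above is simply the percentage of test executions that succeeded; a trivial sketch of that calculation:

```java
public class AvailabilitySketch {
    // Availability: percentage of test executions that succeeded.
    static double availabilityPercent(int successes, int total) {
        return 100.0 * successes / total;
    }

    public static void main(String[] args) {
        // 288 successes out of 300 executions.
        System.out.println(availabilityPercent(288, 300)); // 96.0
    }
}
```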
Inspect and edit tests
From the summary page, select a specific test. There, you can see its specific results, and edit or
temporarily disable it.

You might want to disable availability tests or the alert rules associated with them while you are
performing maintenance on your service.

If you see failures


Click a red dot.

From an availability test result, you can:


Inspect the response received from your server.
Diagnose failure with server side telemetry collected while processing the failed request instance.
Log an issue or work item in GitHub or VSTS to track the problem. The bug will contain a link to this event.
Open the web test result in Visual Studio.
Looks OK but reported as a failure? See FAQ for ways to reduce noise.

Multi-step web tests


You can monitor a scenario that involves a sequence of URLs. For example, if you are monitoring a sales
website, you can test that adding items to the shopping cart works correctly.

NOTE
There is a charge for multi-step web tests. Pricing scheme.

To create a multi-step test, you record the scenario by using Visual Studio Enterprise, and then upload the
recording to Application Insights. Application Insights replays the scenario at intervals and verifies the
responses.

NOTE
You can't use coded functions or loops in your tests. The test must be contained completely in the .webtest script.
However, you can use standard plugins.

1. Record a scenario
Use Visual Studio Enterprise to record a web session.
1. Create a Web performance test project.

Don't see the Web Performance and Load Test template? - Close Visual Studio Enterprise. Open
Visual Studio Installer to modify your Visual Studio Enterprise installation. Under Individual
Components, select Web Performance and load testing tools.
2. Open the .webtest file and start recording.
3. Do the user actions you want to simulate in your test: open your website, add a product to the cart,
and so on. Then stop your test.

Don't make a long scenario. There's a limit of 100 steps and 2 minutes.
4. Edit the test to:
Add validations to check the received text and response codes.
Remove any superfluous interactions. You could also remove dependent requests for
pictures, or requests to ad or tracking sites.
Remember that you can only edit the test script - you can't add custom code or call other
web tests. Don't insert loops in the test. You can use standard web test plug-ins.
5. Run the test in Visual Studio to make sure it works.
The web test runner opens a web browser and repeats the actions you recorded. Make sure it
works as you expect.

2. Upload the web test to Application Insights


1. In the Application Insights portal, create a web test.
2. Select multi-step test, and upload the .webtest file.

Set the test locations, frequency, and alert parameters in the same way as for ping tests.
3. See the results
View your test results and any failures in the same way as single-url tests.
In addition, you can download the test results to view them in Visual Studio.
Too many failures?
A common reason for failure is that the test runs too long. It mustn't run longer than two minutes.
Don't forget that all the resources of a page must load correctly for the test to succeed, including
scripts, style sheets, images, and so forth.
The web test must be entirely contained in the .webtest script: you can't use coded functions in the
test.
Plugging time and random numbers into your multi-step test
Suppose you're testing a tool that gets time-dependent data such as stocks from an external feed. When
you record your web test, you have to use specific times, but you set them as parameters of the test,
StartTime and EndTime.

When you run the test, you'd like EndTime always to be the present time, and StartTime should be 15
minutes ago.
Web test plug-ins provide a way to parameterize times.
1. Add a web test plug-in for each variable parameter value you want. In the web test toolbar, choose
Add Web Test Plugin.

In this example, we use two instances of the Date Time Plug-in. One instance is for "15 minutes
ago" and another for "now."
2. Open the properties of each plug-in. Give it a name and set it to use the current time. For one of
them, set Add Minutes = -15.

3. In the web test parameters, use {{plug-in name}} to reference a plug-in name.
Now, upload your test to the portal. It uses the dynamic values on every run of the test.
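The two Date Time plug-in instances above effectively compute values like these. Here's a minimal Python sketch of the same arithmetic (illustrative only, not the plug-in itself; the timestamp format is an assumption):

```python
from datetime import datetime, timedelta

def plugin_values(now=None, fmt="%Y-%m-%dT%H:%M:%S"):
    """Mimic two Date Time plug-in instances: one for "now" (EndTime)
    and one with Add Minutes = -15 (StartTime)."""
    now = now or datetime.utcnow()
    return {
        "EndTime": now.strftime(fmt),                                # current time
        "StartTime": (now + timedelta(minutes=-15)).strftime(fmt),   # Add Minutes = -15
    }
```

On each run the test picks up fresh values, which is exactly what the plug-ins do when Application Insights replays the recording.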

Dealing with sign-in


If your users sign in to your app, you have various options for simulating sign-in so that you can test
pages behind the sign-in. The approach you use depends on the type of security provided by the app.
In all cases, you should create an account in your application just for the purpose of testing. If possible,
restrict the permissions of this test account so that there's no possibility of the web tests affecting real
users.
Simple username and password
Record a web test in the usual way. Delete cookies first.
SAML authentication
Use the SAML plugin that is available for web tests.
Client secret
If your app has a sign-in route that involves a client secret, use that route. Azure Active Directory (AAD) is
an example of a service that provides a client secret sign-in. In AAD, the client secret is the App Key.
Here's a sample web test of an Azure web app using an app key:

1. Get token from AAD using client secret (AppKey).


2. Extract bearer token from response.
3. Call API using bearer token in the authorization header.
Make sure that the web test is an actual client - that is, it has its own app in AAD - and use its clientId +
appkey. Your service under test also has its own app in AAD: the appID URI of this app is reflected in the
web test in the “resource” field.
Open Authentication
An example of open authentication is signing in with your Microsoft or Google account. Many apps that
use OAuth provide the client secret alternative, so your first tactic should be to investigate that possibility.
If your test must sign in using OAuth, the general approach is:
Use a tool such as Fiddler to examine the traffic between your web browser, the authentication site,
and your app.
Perform two or more sign-ins using different machines or browsers, or at long intervals (to allow
tokens to expire).
By comparing different sessions, identify the token that is passed back from the authenticating
site and then passed to your app server after sign-in.
Record a web test using Visual Studio.
Parameterize the tokens, setting the parameter when the token is returned from the authenticator, and
using it in the query to the site. (Visual Studio attempts to parameterize the test, but does not correctly
parameterize the tokens.)

Performance tests
You can run a load test on your website. Like the availability test, you can send either simple requests or
multi-step requests from our points around the world. Unlike an availability test, many requests are sent,
simulating multiple simultaneous users.
From the Overview blade, open Settings, Performance Tests. When you create a test, you are invited to
connect to or create a Visual Studio Team Services account.
When the test is complete, you are shown response times and success rates.

TIP
To observe the effects of a performance test, use Live Stream and Profiler.
Automation
Use PowerShell scripts to set up an availability test automatically.
Set up a webhook that is called when an alert is raised.

Questions? Problems?
Intermittent test failure with a protocol violation error?
The error ("protocol violation..CR must be followed by LF") indicates an issue with the server (or
dependencies). This happens when malformed headers are set in the response. It can be caused by
load balancers or CDNs. Specifically, some headers might not be using CRLF to indicate end-of-line,
which violates the HTTP specification and therefore fails validation at the .NET WebRequest
level. Inspect the response to spot headers that might be in violation.
Note: The URL may not fail on browsers that have a relaxed validation of HTTP headers. See this
blog post for a detailed explanation of this issue: [Link]
linkedin-api-net-and-http-protocol-violations/
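To spot such headers yourself, you can scan the raw header block for a CR that is not followed by LF. A rough Python sketch (real servers can misbehave in other ways too):

```python
def find_bare_cr(raw_headers: bytes):
    """Return the offsets of any CR (0x0D) not immediately followed by LF (0x0A).
    The HTTP specification requires every header line to end with the CRLF pair,
    so any hit here is a protocol violation."""
    return [i for i, b in enumerate(raw_headers)
            if b == 0x0D and (i + 1 >= len(raw_headers) or raw_headers[i + 1] != 0x0A)]
```

Capture the raw response with a tool such as Fiddler or curl and run it through a check like this to find the offending header.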
Site looks okay but I see test failures?
Check all the images, scripts, style sheets, and any other files loaded by the page. If any of
them fails, the test is reported as failed, even if the main HTML page loads OK. To desensitize
the test to such resource failures, uncheck the "Parse Dependent Requests" option in the
test configuration.
To reduce the odds of noise from transient network blips, make sure the "Enable retries for
test failures" option is checked. You can also test from more locations and adjust the alert
rule threshold accordingly, to prevent location-specific issues from causing undue alerts.
I don't see any related server side telemetry to diagnose test failures?
If you have Application Insights set up for your server-side application, that may be because
sampling is in operation.
Can I call code from my web test?
No. The steps of the test must be in the .webtest file. And you can't call other web tests or use
loops. But there are several plug-ins that you might find helpful.
Is HTTPS supported?
Yes. We support TLS 1.1 and TLS 1.2.
Is there a difference between "web tests" and "availability tests"?
The two terms are often used interchangeably. "Availability tests" is the more generic term,
covering the single URL ping tests in addition to the multi-step web tests.
I'd like to use availability tests on our internal server that runs behind a firewall.
There are two possible solutions:
Configure your firewall to permit incoming requests from the IP addresses of our web test
agents.
Write your own code to periodically test your internal server. Run the code as a background
process on a test server behind your firewall. Your test process can send its results to
Application Insights by using the TrackAvailability() API in the core SDK package. This requires your
test server to have outgoing access to the Application Insights ingestion endpoint, but that is a
much smaller security risk than the alternative of permitting incoming requests. The results will
not appear in the availability web tests blades, but appear as availability results in Analytics,
Search, and Metric Explorer.
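A background test process like the one described might look like this sketch (Python for illustration; `probe` stands in for your own check and `track_availability` for the SDK's TrackAvailability() call, both hypothetical stand-ins here):

```python
import time

def run_availability_loop(probe, track_availability, interval_seconds=300, iterations=None):
    """Periodically run a custom test against the internal server and report the result.
    probe() should return True on success; track_availability(name, success, duration_s)
    forwards the result to Application Insights (e.g. via TrackAvailability())."""
    count = 0
    while iterations is None or count < iterations:
        start = time.monotonic()
        try:
            success = bool(probe())
        except Exception:
            # An exception from the probe counts as a failed availability result.
            success = False
        track_availability("internal-server-check", success, time.monotonic() - start)
        count += 1
        if iterations is None or count < iterations:
            time.sleep(interval_seconds)
```

Run it as a service or scheduled task behind the firewall; only the outgoing call to the ingestion endpoint crosses the boundary.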
Uploading a multi-step web test fails
There's a size limit of 300 KB.
Loops aren't supported.
References to other web tests aren't supported.
Data sources aren't supported.
My multi-step test doesn't complete
There's a limit of 100 requests per test.
The test is stopped if it runs longer than two minutes.
How can I run a test with client certificates?
We don't support that, sorry.

Next steps
Search diagnostic logs
Troubleshooting
IP addresses of web test agents
Set Alerts in Application Insights
11/1/2017 • 4 min to read

Azure Application Insights can alert you to changes in performance or usage metrics in your web app.
Application Insights monitors your live app on a wide variety of platforms to help you diagnose performance
issues and understand usage patterns.
There are three kinds of alerts:
Metric alerts tell you when a metric crosses a threshold value for some period - such as response times,
exception counts, CPU usage, or page views.
Web tests tell you when your site is unavailable on the internet, or responding slowly. Learn more.
Proactive diagnostics are configured automatically to notify you about unusual performance patterns.
We focus on metric alerts in this article.

Set a Metric alert


Open the Alert rules blade, and then use the add button.
Set the resource before the other properties. Choose the "(components)" resource if you want to set
alerts on performance or usage metrics.
The name that you give to the alert must be unique within the resource group (not just your application).
Be careful to note the units in which you're asked to enter the threshold value.
If you check the box "Email owners...", alerts are sent by email to everyone who has access to this resource
group. To expand this set of people, add them to the resource group or subscription (not the resource).
If you specify "Additional emails", alerts are sent to those individuals or groups (whether or not you checked
the "email owners..." box).
Set a webhook address if you have set up a web app that responds to alerts. It is called both when the alert is
Activated and when it is Resolved. (But note that at present, query parameters are not passed through as
webhook properties.)
You can Disable or Enable the alert: see the buttons at the top of the blade.
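A webhook receiver typically only needs the rule name and the alert status from the posted payload. Here's a minimal parsing sketch (Python; the payload shape shown is an assumption for illustration, not the exact schema):

```python
import json

def parse_alert_webhook(payload_json):
    """Extract the fields a responder usually needs: the rule name and
    whether the alert was Activated or Resolved."""
    payload = json.loads(payload_json)
    context = payload.get("context", {})
    return {
        "name": context.get("name"),
        "status": payload.get("status"),          # "Activated" or "Resolved"
        "resolved": payload.get("status") == "Resolved",
    }
```

Because the webhook fires on both activation and resolution, the receiver should branch on the status rather than assume every call means a new problem.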
I don't see the Add Alert button.
Are you using an organizational account? You can set alerts if you have owner or contributor access to this
application resource. Take a look at the Access Control blade. Learn about access control.

NOTE
In the alerts blade, you see that there's already an alert set up: Proactive Diagnostics. The automatic alert monitors one
particular metric, request failure rate. Unless you decide to disable the proactive alert, you don't need to set your own
alert on request failure rate.

See your alerts


You get an email when an alert changes state between inactive and active.
The current state of each alert is shown in the Alert rules blade.
There's a summary of recent activity in the alerts drop-down:

The history of state changes is in the Activity Log:


How alerts work
An alert has three states: "Never activated", "Activated", and "Resolved." Activated means the condition you
specified was true when it was last evaluated.
A notification is generated when an alert changes state. (If the alert condition was already true when you
created the alert, you might not get a notification until the condition goes false.)
Each notification generates an email if you checked the emails box, or provided email addresses. You can also
look at the Notifications drop-down list.
An alert is evaluated each time a metric arrives, but not otherwise.
The evaluation aggregates the metric over the preceding period, and then compares it to the threshold to
determine the new state.
The period that you choose specifies the interval over which metrics are aggregated. It doesn't affect how
often the alert is evaluated: that depends on the frequency of arrival of metrics.
If no data arrives for a particular metric for some time, the gap has different effects on alert evaluation
and on the charts in metric explorer. In metric explorer, if no data is seen for longer than the chart's
sampling interval, the chart shows a value of 0. But an alert based on the same metric is not
reevaluated, and the alert's state remains unchanged.
When data eventually arrives, the chart jumps back to a non-zero value. The alert evaluates based on the
data available for the period you specified. If the new data point is the only one available in the period, the
aggregate is based just on that data point.
An alert can flicker frequently between alert and healthy states, even if you set a long period. This can happen
if the metric value hovers around the threshold. There is no hysteresis in the threshold: the transition to alert
happens at the same value as the transition to healthy.
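The evaluation logic described above can be sketched as a small state machine (illustrative Python; the aggregation here is a mean, whereas the real aggregation depends on the metric you chose):

```python
def evaluate_alert(values_in_period, threshold, current_state="Never activated"):
    """Evaluate an alert rule: aggregate the metric over the preceding period,
    compare with the threshold, and report whether a notification would fire.
    States: "Never activated", "Activated", "Resolved"."""
    if not values_in_period:
        # A gap in the data: the alert is not reevaluated; state is unchanged.
        return current_state, False
    aggregate = sum(values_in_period) / len(values_in_period)
    if aggregate > threshold:
        new_state = "Activated"
    elif current_state == "Activated":
        new_state = "Resolved"
    else:
        new_state = current_state   # no hysteresis: same threshold in both directions
    # A notification is generated only when the state changes.
    return new_state, new_state != current_state
```

Note the lack of hysteresis in the last branch: a metric hovering near the threshold flips the state back and forth, which is exactly the flicker described above.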

What are good alerts to set?


It depends on your application. To start with, it's best not to set too many metrics. Spend some time looking at
your metric charts while your app is running, to get a feel for how it behaves normally. This practice helps you
find ways to improve its performance. Then set up alerts to tell you when the metrics go outside the normal
zone.
Popular alerts include:
Browser metrics, especially Browser page load times, are good for web applications. If your page has many
scripts, you should look for browser exceptions. In order to get these metrics and alerts, you have to set up
web page monitoring.
Server response time for the server side of web applications. As well as setting up alerts, keep an eye on
this metric to see if it varies disproportionately with high request rates: variation might indicate that your app
is running out of resources.
Server exceptions - to see them, you have to do some additional setup.
Don't forget that proactive failure rate diagnostics automatically monitor the rate at which your app responds to
requests with failure codes.

Automation
Use PowerShell to automate setting up alerts
Use webhooks to automate responding to alerts

See also
Availability web tests
Automate setting up alerts
Proactive diagnostics
Smart Detection in Application Insights
11/1/2017 • 1 min to read

Smart Detection automatically warns you of potential performance problems in your web application. It performs
proactive analysis of the telemetry that your app sends to Application Insights. If there is a sudden rise in failure
rates, or abnormal patterns in client or server performance, you get an alert. This feature needs no configuration.
It operates if your application sends enough telemetry.
You can access Smart Detection alerts both from the emails you receive, and from the Smart Detection blade.

Review your Smart Detections


You can discover detections in two ways:
You receive an email from Application Insights. Here's a typical example:

Click the big button to open more detail in the portal.


The Smart Detection tile on your app's overview blade shows a count of recent alerts. Click the tile to see a
list of recent alerts.
Select an alert to see its details.

What problems are detected?


There are three kinds of detection:
Smart detection - Failure Anomalies. We use machine learning to set the expected rate of failed requests for
your app, correlating with load and other factors. If the failure rate goes outside the expected envelope, we
send an alert.
Smart detection - Performance Anomalies. You get notifications if response time of an operation or
dependency duration is slowing down compared to historical baseline or if we identify an anomalous pattern
in response time or page load time.
Smart detection - Azure Cloud Service issues. You get alerts if your app is hosted in Azure Cloud Services and
a role instance has startup failures, frequent recycling, or runtime crashes.
(The help links in each notification take you to the relevant articles.)


Next steps
These diagnostic tools help you inspect the telemetry from your app:
Metric explorer
Search explorer
Analytics - powerful query language
Smart Detection is completely automatic. But maybe you'd like to set up some more alerts?
Manually configured metric alerts
Availability web tests
Smart Detection - Failure Anomalies
11/1/2017 • 8 min to read

Application Insights automatically notifies you in near real time if your web app experiences an abnormal rise in
the rate of failed requests. It detects an unusual rise in the rate of HTTP requests or dependency calls that are
reported as failed. For requests, failed requests are usually those with response codes of 400 or higher. To help you
triage and diagnose the problem, an analysis of the characteristics of the failures and related telemetry is provided
in the notification. There are also links to the Application Insights portal for further diagnosis. The feature requires no
setup or configuration, as it uses machine learning algorithms to predict the normal failure rate.
This feature works for Java and [Link] web apps, hosted in the cloud or on your own servers. It also works for
any app that generates request or dependency telemetry - for example, if you have a worker role that calls
TrackRequest() or TrackDependency().
After setting up Application Insights for your project, and provided your app generates a certain minimum amount
of telemetry, Smart Detection of failure anomalies takes 24 hours to learn the normal behavior of your app, before
it is switched on and can send alerts.
Here's a sample alert.
NOTE
By default, you get a shorter format mail than this example. But you can switch to this detailed format.

Notice that it tells you:


The failure rate compared to normal app behavior.
How many users are affected – so you know how much to worry.
A characteristic pattern associated with the failures. In this example, there’s a particular response code, request
name (operation) and app version. That immediately tells you where to start looking in your code. Other
possibilities could be a specific browser or client operating system.
The exception, log traces, and dependency failure (databases or other external components) that appear to be
associated with the characterized failures.
Links directly to relevant searches on the telemetry in Application Insights.

Benefits of Smart Detection


Ordinary metric alerts tell you there might be a problem. But Smart Detection starts the diagnostic work for you,
performing a lot of the analysis you would otherwise have to do yourself. You get the results neatly packaged,
helping you to get quickly to the root of the problem.

How it works
Smart Detection monitors the telemetry received from your app, and in particular the failure rates. This rule counts
the number of requests for which the Successful request property is false, and the number of dependency calls
for which the Successful call property is false. For requests, by default,
Successful request == (resultCode < 400) (unless you have written custom code to filter or generate your own
TrackRequest calls).
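As a sketch, the default success classification amounts to the following (Python for illustration only; the actual rule runs inside the service and can be overridden by custom TrackRequest code):

```python
def failed_request_rate(result_codes):
    """Count requests the way the default rule does: a request is successful
    when resultCode < 400, and failed otherwise. Returns the failure rate."""
    if not result_codes:
        return 0.0
    failed = sum(1 for code in result_codes if not (code < 400))
    return failed / len(result_codes)
```

It is this per-period rate, for both requests and dependency calls, that Smart Detection compares against the machine-learned baseline.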
Your app’s performance has a typical pattern of behavior. Some requests or dependency calls will be more prone
to failure than others; and the overall failure rate may go up as load increases. Smart Detection uses machine
learning to find these anomalies.
As telemetry comes into Application Insights from your web app, Smart Detection compares the current behavior
with the patterns seen over the past few days. If an abnormal rise in failure rate is observed by comparison with
previous performance, an analysis is triggered.
When an analysis is triggered, the service performs a cluster analysis on the failed request, to try to identify a
pattern of values that characterize the failures. In the example above, the analysis has discovered that most failures
are about a specific result code, request name, Server URL host, and role instance. By contrast, the analysis has
discovered that the client operating system property is distributed over multiple values, and so it is not listed.
When your service is instrumented with these telemetry calls, the analyzer looks for an exception and a
dependency failure that are associated with requests in the cluster it has identified, together with an example of
any trace logs associated with those requests.
The resulting analysis is sent to you as an alert, unless you have configured it not to.
Like the alerts you set manually, you can inspect the state of the alert and configure it in the Alerts blade of your
Application Insights resource. But unlike other alerts, you don't need to set up or configure Smart Detection. If you
want, you can disable it or change its target email addresses.

Configure alerts
You can disable Smart Detection, change the email recipients, create a webhook, or opt in to more detailed alert
messages.
Open the Alerts page. Failure Anomalies is included along with any alerts that you have set manually, and you can
see whether it is currently in the alert state.

Click the alert to configure it.

Notice that you can disable Smart Detection, but you can't delete it (or create another one).
Detailed alerts
If you select "Get more detailed diagnostics" then the email will contain more diagnostic information. Sometimes
you'll be able to diagnose the problem just from the data in the email.
There's a slight risk that the more detailed alert could contain sensitive information, because it includes exception
and trace messages. However, this would only happen if your code could allow sensitive information into those
messages.

Triaging and diagnosing an alert


An alert indicates that an abnormal rise in the failed request rate was detected. It's likely that there is some
problem with your app or its environment.
From the percentage of requests and number of users affected, you can decide how urgent the issue is. In the
example above, the failure rate of 22.5%, compared with a normal rate of 1%, indicates that something bad is going
on. On the other hand, only 11 users were affected. If it were your app, you'd be able to assess how serious that is.
In many cases, you will be able to diagnose the problem quickly from the request name, exception, dependency
failure and trace data provided.
There are some other clues. For example, the dependency failure rate in this example is the same as the exception
rate (89.3%). This suggests that the exception arises directly from the dependency failure - giving you a clear idea
of where to start looking in your code.
To investigate further, the links in each section will take you straight to a search page filtered to the relevant
requests, exception, dependency or traces. Or you can open the Azure portal, navigate to the Application Insights
resource for your app, and open the Failures blade.
In this example, clicking the 'View dependency failures details' link opens the Application Insights search blade. It
shows the SQL statement that exemplifies the root cause: NULLs were provided for mandatory fields and did
not pass validation during the save operation.

Review recent alerts


Click Smart Detection to get to the most recent alert:
What's the difference ...
Smart Detection of failure anomalies complements other similar but distinct features of Application Insights.
Metric Alerts are set by you and can monitor a wide range of metrics such as CPU occupancy, request rates,
page load times, and so on. You can use them to warn you, for example, if you need to add more resources.
By contrast, Smart Detection of failure anomalies covers a small range of critical metrics (currently only
failed request rate), designed to notify you in near real time when your web app's failed request rate
increases significantly compared to your web app's normal behavior.
Smart Detection automatically adjusts its threshold in response to prevailing conditions.
Smart Detection starts the diagnostic work for you.
Smart Detection of performance anomalies also uses machine intelligence to discover unusual patterns in your
metrics, and no configuration by you is required. But unlike Smart Detection of failure anomalies, the purpose
of Smart Detection of performance anomalies is to find segments of your usage manifold that might be badly
served - for example, by specific pages on a specific type of browser. The analysis is performed daily, and if any
result is found, it's likely to be much less urgent than an alert. By contrast, the analysis for failure anomalies is
performed continuously on incoming telemetry, and you will be notified within minutes if server failure rates
are greater than expected.

If you receive a Smart Detection alert


Why have I received this alert?
We detected an abnormal rise in the failed request rate compared to the normal baseline of the preceding period.
After analysis of the failures and associated telemetry, we think that there is a problem that you should look
into.
Does the notification mean I definitely have a problem?
We try to alert on app disruption or degradation, but only you can fully understand the semantics and the
impact on the app or users.
So, you guys look at my data?
No. The service is entirely automatic. Only you get the notifications. Your data is private.
Do I have to subscribe to this alert?
No. Every application that sends request telemetry has the Smart Detection alert rule.
Can I unsubscribe or get the notifications sent to my colleagues instead?
Yes. In Alert rules, click the Smart Detection rule to configure it. You can disable the alert, or change recipients
for the alert.
I lost the email. Where can I find the notifications in the portal?
In the Activity logs. In Azure, open the Application Insights resource for your app, then select Activity logs.
Some of the alerts are about known issues and I do not want to receive them.
We have alert suppression on our backlog.

Next steps
These diagnostic tools help you inspect the telemetry from your app:
Metric explorer
Search explorer
Analytics - powerful query language
Smart detections are completely automatic. But maybe you'd like to set up some more alerts?
Manually configured metric alerts
Availability web tests
Smart Detection - Performance Anomalies
1/3/2018 • 9 min to read

Application Insights automatically analyzes the performance of your web application, and can warn you about
potential problems. You might be reading this because you received one of our smart detection notifications.
This feature requires no special setup, other than configuring your app for Application Insights (on [Link], Java,
or [Link], and in web page code). It is active when your app generates enough telemetry.

When would I get a smart detection notification?


Application Insights has detected that the performance of your application has degraded in one of these ways:
Response time degradation - Your app has started responding to requests more slowly than it used to. The
change might have been rapid, for example because there was a regression in your latest deployment. Or it
might have been gradual, maybe caused by a memory leak.
Dependency duration degradation - Your app makes calls to a REST API, database, or other dependency. The
dependency is responding more slowly than it used to.
Slow performance pattern - Your app appears to have a performance issue that is affecting only some
requests. For example, pages are loading more slowly on one type of browser than others; or requests are being
served more slowly from one particular server. Currently, our algorithms look at page load times, request
response times, and dependency response times.
Smart Detection requires at least 8 days of telemetry at a workable volume in order to establish a baseline of
normal performance. So, after your application has been running for that period, any significant issue will result in
a notification.

Does my app definitely have a problem?


No, a notification doesn't mean that your app definitely has a problem. It's simply a suggestion about something
you might want to look at more closely.

How do I fix it?


The notifications include diagnostic information. Here's an example:
1. Triage. The notification shows you how many users or how many operations are affected. This can help you
assign a priority to the problem.
2. Scope. Is the problem affecting all traffic, or just some pages? Is it restricted to particular browsers or locations?
This information can be obtained from the notification.
3. Diagnose. Often, the diagnostic information in the notification will suggest the nature of the problem. For
example, if response time slows down when request rate is high, that suggests your server or dependencies
are overloaded.
Otherwise, open the Performance blade in Application Insights. There, you will find Profiler data. If
exceptions are thrown, you can also try the snapshot debugger.

Configure Email Notifications


Smart Detection notifications are enabled by default and sent to those who have owner, contributor, or reader
access to the Application Insights resource. To change this, either click Configure in the email notification, or open
Smart Detection settings in Application Insights.
You can use the unsubscribe link in the Smart Detection email to stop receiving the email notifications.
Emails about Smart Detection performance anomalies are limited to one email per day per Application Insights
resource. The email is sent only if there is at least one new issue that was detected on that day. You won't get
repeats of any message.

FAQ
So, Microsoft staff look at my data?
No. The service is entirely automatic. Only you get the notifications. Your data is private.
Do you analyze all the data collected by Application Insights?
Not at present. Currently, we analyze request response time, dependency response time, and page load
time. Analysis of additional metrics is on our backlog.
What types of application does this work for?
These degradations are detected in any application that generates the appropriate telemetry. If you
installed Application Insights in your web app, then requests and dependencies are automatically tracked.
But in backend services or other apps, if you inserted calls to TrackRequest() or TrackDependency(), then
Smart Detection works in the same way.
Can I create my own anomaly detection rules or customize existing rules?
Not yet, but you can:
Set up alerts that tell you when a metric crosses a threshold.
Export telemetry to a database or to PowerBI, where you can analyze it yourself.
How often is the analysis performed?
We run the analysis daily on the telemetry from the previous day (full day in UTC timezone).
So does this replace metric alerts?
No. We don't commit to detecting every behavior that you might consider abnormal.
If I don't do anything in response to a notification, will I get a reminder?
No, you get a message about each issue only once. If the issue persists, it is updated in the Smart
Detection feed blade.
I lost the email. Where can I find the notifications in the portal?
In the Application Insights overview of your app, click the Smart Detection tile. There you'll be able to
find all notifications up to 90 days back.
How can I improve performance?
Slow and failed responses are one of the biggest frustrations for web site users, as you know from your own
experience. So, it's important to address the issues.
Triage
First, does it matter? If a page is always slow to load, but only 1% of your site's users ever have to look at it, maybe
you have more important things to think about. On the other hand, if only 1% of users open it, but it throws
exceptions every time, that might be worth investigating.
Use the impact statement (affected users or % of traffic) as a general guide, but be aware that it isn't the whole
story. Gather other evidence to confirm.
Consider the parameters of the issue. If it's geography-dependent, set up availability tests including that region:
there might simply be network issues in that area.
Diagnose slow page loads
Where is the problem? Is the server slow to respond, is the page very long, or does the browser have to do a lot of
work to display it?
Open the Browsers metric blade. The segmented display of browser page load time shows where the time is going.
If Send Request Time is high, either the server is responding slowly, or the request is a post with a lot of data.
Look at the performance metrics to investigate response times.
Set up dependency tracking to see whether the slowness is due to external services or your database.
If Receiving Response is predominant, your page and its dependent parts - JavaScript, CSS, images, and so on
(but not asynchronously loaded data) - are large. Set up an availability test, and be sure to set the option to load
dependent parts. When you get some results, open the detail of a result and expand it to see the load times of
different files.
High Client Processing time suggests scripts are running slowly. If the reason isn't obvious, consider adding
some timing code and sending the times in trackMetric calls.
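As a minimal sketch of that idea: time a suspect operation and report the elapsed time as a custom metric. Here track_metric is a hypothetical stand-in for the real call (in a web page you would call the JavaScript SDK's trackMetric); the function names and metric name are invented for illustration.

```python
import time

def track_metric(name, value):
    # Hypothetical stand-in for the Application Insights trackMetric call.
    print(f"metric {name}: {value:.1f} ms")

def timed(name, fn, *args):
    """Run fn, then report its elapsed time as a custom metric."""
    start = time.perf_counter()
    result = fn(*args)
    track_metric(name, (time.perf_counter() - start) * 1000)
    return result

total = timed("sumRange", sum, range(1000))
```

Once the metric arrives in the portal, you can chart it in Metrics Explorer to see where client time is going.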
Improve slow pages
There's a web full of advice on improving your server responses and page load times, so we won't try to repeat it
all here. Here are a few tips that you probably already know about, just to get you thinking:
Slow loading because of big files: Load the scripts and other parts asynchronously. Use script bundling. Break
the main page into widgets that load their data separately. Don't send plain old HTML for long tables: use a
script to request the data as JSON or other compact format, then fill the table in place. There are great
frameworks to help with all this. (They also entail big scripts, of course.)
Slow server dependencies: Consider the geographical locations of your components. For example, if you're
using Azure, make sure the web server and the database are in the same region. Do queries retrieve more
information than they need? Would caching or batching help?
Capacity issues: Look at the server metrics of response times and request counts. If response times peak
disproportionately with peaks in request counts, it's likely that your servers are stretched.

Server Response Time Degradation


The response time degradation notification tells you:
The response time compared to normal response time for this operation.
How many users are affected.
Average response time and 90th percentile response time for this operation on the day of the detection and 7
days before.
Count of this operation requests on the day of the detection and 7 days before.
Correlation between degradation in this operation and degradations in related dependencies.
Links to help you diagnose the problem.
Profiler traces to help you view where operation time is spent (the link is available if Profiler trace
examples were collected for this operation during the detection period).
Performance reports in Metric Explorer, where you can slice and dice time range/filters for this operation.
Search for calls of this operation to view specific call properties.
Failure reports - if the count is greater than 1, there were failures in this operation that might have
contributed to the performance degradation.
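For intuition, the average and 90th percentile statistics in the notification can be computed from raw request durations like this. This is a simplified sketch with invented sample data; the service computes these values from the stored telemetry, and the factor-of-two comparison here is illustrative, not the actual detection rule.

```python
import math

def average(ms):
    return sum(ms) / len(ms)

def p90(ms):
    # Nearest-rank 90th percentile of durations in milliseconds.
    ordered = sorted(ms)
    return ordered[math.ceil(0.9 * len(ordered)) - 1]

baseline = [120, 130, 110, 125, 140, 115, 135, 128, 122, 118]  # 7 days before
today = [240, 260, 250, 255, 245, 270, 265, 248, 252, 258]     # detection day

# A large jump in either statistic versus the baseline suggests degradation.
degraded = average(today) > 2 * average(baseline)
```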

Dependency Duration Degradation


Modern applications increasingly adopt a microservices design approach, which in many cases leads to heavy
reliance on external services. For example, if your application relies on some data platform, or if you build
your own bot service, you will probably rely on a cognitive services provider to enable your bots to interact in
more human ways, and on some data store service for the bot to pull answers from.
Example dependency degradation notification:

Notice that it tells you:


The duration compared to normal response time for this operation
How many users are affected
Average duration and 90th percentile duration for this dependency on the day of the detection and 7 days
before
Number of dependency calls on the day of the detection and 7 days before
Links to help you diagnose the problem
Performance reports in Metric Explorer for this dependency
Search for calls of this dependency to view call properties
Failure reports - if the count is greater than 1, there were failed dependency calls during the detection
period that might have contributed to the duration degradation.
Open Analytics with queries that calculate this dependency duration and count

Smart Detection of slow performing patterns


Application Insights finds performance issues that might affect only some portion of your users, or affect
users only in some cases. For example, it can notify you that page loads are slower on one type of browser than on
other types of browsers, or that requests are served more slowly from a particular server. It can also discover
problems associated with combinations of properties, such as slow page loads in one geographical area for clients
using a particular operating system.
Anomalies like these are very hard to detect just by inspecting the data, but are more common than you might
think. Often they only surface when your customers complain. By that time, it’s too late: the affected users are
already switching to your competitors!
Currently, our algorithms look at page load times, request response times at the server, and dependency response
times.
You don't have to set any thresholds or configure rules. Machine learning and data mining algorithms are used to
detect abnormal patterns.
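A toy illustration of the kind of comparison involved: group page-load times by a property (here browser type, with invented sample data) and flag any segment whose average is far worse than the average of all other events. The real detection uses machine learning over many property combinations; the factor-of-two threshold is invented.

```python
from statistics import mean

# Page-load durations in ms, tagged by browser type (invented sample data).
samples = {
    "Chrome": [300, 320, 310, 305],
    "Edge": [310, 315, 295, 300],
    "UC Browser": [900, 950, 920, 940],
}

def slow_segments(samples, factor=2.0):
    """Flag segments whose mean is > factor x the mean of all other events."""
    flagged = []
    for key, vals in samples.items():
        others = [v for k, vs in samples.items() if k != key for v in vs]
        if mean(vals) > factor * mean(others):
            flagged.append(key)
    return flagged
```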

When shows the time the issue was detected.


What describes:
The problem that was detected.
The characteristics of the set of events that we found exhibited the problem behavior.
The table compares the poorly-performing set with the average behavior of all other events.
Click the links to open Metric Explorer and Search on relevant reports, filtered on the time and properties of the
slow performing set.
Modify the time range and filters to explore the telemetry.

Next steps
These diagnostic tools help you inspect the telemetry from your app:
Profiler
Snapshot debugger
Analytics
Analytics smart diagnostics
Smart detections are completely automatic. But maybe you'd like to set up some more alerts?
Manually configured metric alerts
Availability web tests
Degradation in trace severity ratio (preview)
12/8/2017 • 1 min to read

Traces are widely used in applications, as they help tell the story of what happens behind the scenes. When things
go wrong, traces provide crucial visibility into the sequence of events leading to the undesired state. While traces
are generally unstructured, there is one thing that can concretely be learned from them – their severity level. In an
application’s steady state, we would expect the ratio between “good” traces (Info and Verbose) and “bad” traces
(Warning, Error and Critical) to remain stable. The assumption is that “bad” traces may happen on a regular basis to
a certain extent due to any number of reasons (transient network issues for instance). But when a real problem
begins growing, it usually manifests as an increase in the relative proportion of “bad” traces vs “good” traces.
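The ratio in question can be made concrete with a small sketch. Here the severity names follow the text above, the counts are invented sample data, and the factor-of-two comparison is illustrative rather than the actual detection rule.

```python
def bad_trace_ratio(counts):
    """Fraction of traces at a 'bad' severity level.

    counts: mapping of severity level -> number of traces logged.
    """
    good = counts.get("Info", 0) + counts.get("Verbose", 0)
    bad = (counts.get("Warning", 0) + counts.get("Error", 0)
           + counts.get("Critical", 0))
    return bad / (good + bad)

baseline = bad_trace_ratio({"Info": 900, "Verbose": 60, "Warning": 30, "Error": 10})
today = bad_trace_ratio({"Info": 700, "Verbose": 50, "Warning": 150, "Error": 100})

# A steady-state app keeps this ratio stable; a jump suggests trouble.
degraded = today > 2 * baseline
```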
Application Insights Smart Detection automatically analyzes the traces logged by your application, and can warn
you about unusual patterns in the severity of your trace telemetry.
This feature requires no special setup, other than configuring trace logging for your app (see how to configure a
trace log listener for .NET or Java). It is active when your app generates enough trace telemetry.

When would I get this type of smart detection notification?


You might get this type of notification if the ratio between “good” traces (traces logged with a level of Info or
Verbose) and “bad” traces (traces logged with a level of Warning, Error, or Fatal) degrades on a specific day,
compared to a baseline calculated over the previous seven days.

Does my app definitely have a problem?


No, a notification doesn't mean that your app definitely has a problem. Although a degradation in the ratio between
“good” and “bad” traces might indicate an application issue, this change in ratio might be benign. For example, the
increase might be due to a new flow in the application emitting more “bad” traces than existing flows.

How do I fix it?


The notifications include diagnostic information to support the diagnostic process:
1. Triage. The notification shows you how many operations are affected. This can help you assign a priority to the
problem.
2. Scope. Is the problem affecting all traffic, or just some operation? This information can be obtained from the
notification.
3. Diagnose. You can use the related items and reports linking to supporting information, to help you further
diagnose the issue.
Abnormal rise in exception volume (preview)
12/8/2017 • 1 min to read

Application Insights automatically analyzes the exceptions thrown in your application, and can warn you about
unusual patterns in your exception telemetry.
This feature requires no special setup, other than configuring exception reporting for your app. It is active when
your app generates enough exception telemetry.

When would I get this type of smart detection notification?


You might get this type of notification if your app is exhibiting an abnormal rise in the number of exceptions of a
specific type during a day, compared to a baseline calculated over the previous seven days. Machine learning
algorithms are used to detect the rise in exception count, while taking into account natural growth in your
application usage.
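To see why usage growth matters, compare exception counts normalized per request rather than raw counts. This is a simplified sketch with invented numbers and an invented threshold; the real detector uses machine learning, not a fixed ratio.

```python
def exceptions_per_request(exceptions, requests):
    # Normalizing by request volume separates a real rise in failures
    # from a rise that merely tracks growing traffic.
    return exceptions / requests

baseline = exceptions_per_request(exceptions=50, requests=10_000)  # prior-week day
today = exceptions_per_request(exceptions=400, requests=12_000)    # detection day

abnormal = today > 3 * baseline  # invented threshold for illustration
```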

Does my app definitely have a problem?


No, a notification doesn't mean that your app definitely has a problem. Although an excessive number of exceptions
usually indicates an application issue, these exceptions might be benign and handled correctly by your application.

How do I fix it?


The notifications include diagnostic information to support the diagnostic process:
1. Triage. The notification shows you how many users or how many requests are affected. This can help you
assign a priority to the problem.
2. Scope. Is the problem affecting all traffic, or just some operation? This information can be obtained from the
notification.
3. Diagnose. The detection contains information about the method from which the exception was thrown, as well
as the exception type. You can also use the related items and reports linking to supporting information, to help
you further diagnose the issue.
Memory leak detection (preview)
1/12/2018 • 1 min to read

Application Insights automatically analyzes the memory consumption of each process in your application, and can
warn you about potential memory leaks or increased memory consumption.
This feature requires no special setup, other than configuring performance counters for your app. It is active when
your app generates enough memory performance counter telemetry (for example, Private Bytes).

When would I get this type of smart detection notification?


A typical notification will follow a consistent increase in memory consumption over a long period of time (a few
hours), in one or more processes and/or one or more machines, which are part of your application. Machine
learning algorithms are used for detecting increased memory consumption that matches a pattern of a memory
leak, in contrast to increased memory consumption due to naturally increasing application usage.
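One simple signal of such a pattern is a persistently positive trend in a counter like Private Bytes. The sketch below fits a least-squares slope to hourly samples; the data and the slope threshold are invented, and the real detection uses machine learning rather than a single regression.

```python
def slope(samples):
    # Least-squares slope of evenly spaced samples (units per interval).
    n = len(samples)
    mx = (n - 1) / 2
    my = sum(samples) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(samples))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

# Private Bytes sampled hourly (MB): a steady climb suggests a leak pattern.
private_bytes = [500, 540, 585, 630, 660, 705, 750, 800]
leak_suspected = slope(private_bytes) > 20  # invented threshold (MB/hour)
```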

Does my app definitely have a problem?


No, a notification doesn't mean that your app definitely has a problem. Although memory leak patterns usually
indicate an application issue, these patterns could be typical of your specific process, or could have a natural
business justification, and can be ignored.

How do I fix it?


The notifications include diagnostic information to support the diagnostic analysis process:
1. Triage. The notification shows you the amount of memory increase (in GB), and the time range in which the
memory has increased. This can help you assign a priority to the problem.
2. Scope. How many machines exhibited the memory leak pattern? How many exceptions were triggered during
the potential memory leak? This information can be obtained from the notification.
3. Diagnose. The detection contains the memory leak pattern, showing memory consumption of the process over
time. You can also use the related items and reports linking to supporting information, to help you further
diagnose the issue.
Low utilization of cloud resources (preview)
1/12/2018 • 1 min to read

Application Insights automatically analyzes the CPU consumption of each role instance in your application and
detects instances with low CPU utilization. This detection enables you to scale down your Azure resources and
reduce costs, by decreasing the number of instances each role uses, or by decreasing the number of roles.
This feature requires no special setup, other than configuring performance counters for your app. It is active when
your app generates enough CPU performance counter telemetry (% Processor Time).

When would I get this type of smart detection notification?


A typical notification occurs when many of your Web/Worker Role instances exhibit low CPU utilization.
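The idea can be sketched by averaging % Processor Time samples per instance and flagging the quiet ones. Instance names, sample data, and the 10% threshold are all invented for illustration.

```python
from statistics import mean

# % Processor Time samples per role instance (invented data).
cpu = {
    "web-role-0": [4, 6, 5, 3],
    "web-role-1": [5, 4, 6, 5],
    "worker-role-0": [55, 60, 58, 62],
}

def underutilized(cpu, threshold=10.0):
    """Return instances whose average CPU utilization is below threshold."""
    return [name for name, samples in cpu.items() if mean(samples) < threshold]
```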

Does my app definitely consume too many resources?


No, a notification doesn't mean that your app definitely consumes too many resources. Although such patterns of
low CPU utilization usually indicate that resource consumption could be decreased, this behavior could be typical of
your specific role, or could have a natural business justification, and can be ignored. For example, it could be that

How do I fix it?


The notifications include diagnostic information to support the diagnostic process:
1. Triage. The notification shows you the roles in your app that exhibit low CPU utilization. This can help you
assign a priority to the problem.
2. Scope. How many roles exhibited low CPU utilization, and how many instances in each role utilize low CPU?
This information can be obtained from the notification.
3. Diagnose. The detection contains the percentage of CPU utilized, showing CPU utilization of each instance over
time. You can also use the related items and reports linking to supporting information, such as percentiles of
CPU utilization, to help you further diagnose the issue.
Application security detection pack (preview)
1/12/2018 • 2 min to read

Application Insights automatically analyzes the telemetry generated by your application and detects potential
security issues. This capability enables you to identify potential security problems, and handle them by fixing the
application or by taking the necessary security measures.
This feature requires no special setup, other than configuring your app to send telemetry.

When would I get this type of smart detection notification?


There are three types of security issues that are detected:
1. Insecure URL access: a URL in the application is being accessed via both HTTP and HTTPS. Typically, a URL that
accepts HTTPS requests should not accept HTTP requests. This may indicate a bug or security issue in your
application.
2. Insecure form: a form (or other "POST" request) in the application uses HTTP instead of HTTPS. Using HTTP can
compromise the user data that is sent by the form.
3. Suspicious user activity: the application is being accessed from multiple countries by the same user at
approximately the same time. For example, the same user accessed the application from Spain and the United
States within the same hour. This detection indicates a potentially malicious access attempt to your application.
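The third detection can be sketched as: flag any user whose requests arrive from different countries within a short window. The user names, countries, timestamps, and one-hour window below are invented sample data, not the actual detection logic.

```python
from datetime import datetime, timedelta

# (user, country, timestamp) request records - invented sample data.
requests = [
    ("alice", "Spain", datetime(2018, 1, 5, 10, 0)),
    ("alice", "United States", datetime(2018, 1, 5, 10, 30)),
    ("bob", "France", datetime(2018, 1, 5, 9, 0)),
    ("bob", "France", datetime(2018, 1, 5, 11, 0)),
]

def suspicious_users(requests, window=timedelta(hours=1)):
    """Flag users seen in two different countries within the window."""
    flagged = set()
    for user, c1, t1 in requests:
        for u2, c2, t2 in requests:
            if user == u2 and c1 != c2 and abs(t1 - t2) <= window:
                flagged.add(user)
    return flagged
```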

Does my app definitely have a security issue?


No, a notification doesn't mean that your app definitely has a security issue. A detection of any of the scenarios
above can, in many cases, indicate a security issue. However, the detection may have a natural business justification,
and can be ignored.

How do I fix the "Insecure URL access" detection?


1. Triage. The notification provides the number of users who accessed insecure URLs, and the URL that was most
affected by insecure access. This can help you assign a priority to the problem.
2. Scope. What percentage of the users accessed insecure URLs? How many URLs were affected? This information
can be obtained from the notification.
3. Diagnose. The detection provides the list of insecure requests, and the lists of URLs and users that were
affected, to help you further diagnose the issue.

How do I fix the "Insecure form" detection?


1. Triage. The notification provides the number of insecure forms and number of users whose data was potentially
compromised. This can help you assign a priority to the problem.
2. Scope. Which form was involved in the largest number of insecure transmissions, and what is the distribution of
insecure transmissions over time? This information can be obtained from the notification.
3. Diagnose. The detection provides the list of insecure forms and a breakdown of the number of insecure
transmissions for each form, to help you further diagnose the issue.

How do I fix the "Suspicious user activity" detection?


1. Triage. The notification provides the number of different users that exhibited the suspicious behavior. This can
help you assign a priority to the problem.
2. Scope. From which countries did the suspicious requests originate? Which user was the most suspicious? This
information can be obtained from the notification.
3. Diagnose. The detection provides the list of suspicious users and the list of countries for each user, to help you
further diagnose the issue.
Create an Application Insights resource
11/1/2017 • 2 min to read

Azure Application Insights displays data about your application in a Microsoft Azure resource. Creating a new
resource is therefore part of setting up Application Insights to monitor a new application. In many cases, creating
a resource can be done automatically by the IDE. But in some cases, you create a resource manually - for
example, to have separate resources for development and production builds of your application.
After you have created the resource, you get its instrumentation key and use that to configure the SDK in the
application. The instrumentation key links the telemetry to the resource.
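Conceptually, the SDK stamps every telemetry item with that key so the ingestion service can route it to your resource. The sketch below is a simplified model of that idea; the key is a placeholder and the field layout is not the exact wire schema.

```python
INSTRUMENTATION_KEY = "00000000-0000-0000-0000-000000000000"  # placeholder key

def envelope(item_type, data):
    # Every telemetry item carries the key of the resource it belongs to,
    # which is how the portal knows where to display it.
    return {"iKey": INSTRUMENTATION_KEY, "name": item_type, "data": data}

e = envelope("Request", {"url": "/home", "duration_ms": 120})
```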

Sign up to Microsoft Azure


If you haven't got a Microsoft account, get one now. (If you use services like [Link], OneDrive, Windows
Phone, or Xbox Live, you already have a Microsoft account.)
You also need a subscription to Microsoft Azure. If your team or organization has an Azure subscription, the
owner can add you to it, using your Windows Live ID. You're only charged for what you use. The default basic
plan allows for a certain amount of experimental use free of charge.
When you've got access to a subscription, sign in to Application Insights at [Link] with your
Live ID.

Create an Application Insights resource


In the [Link], add an Application Insights resource:

Application type affects what you see on the overview blade and the properties available in metric explorer.
If you don't see your type of app, choose General.
Subscription is your payment account in Azure.
Resource group is a convenience for managing properties like access control. If you have already created
other Azure resources, you can choose to put this new resource in the same group.
Location is where we keep your data.
Pin to dashboard puts a quick-access tile for your resource on your Azure Home page. Recommended.
When your app has been created, a new blade opens. This blade is where you see performance and usage data
about your app.
To get back to it next time you log in to Azure, look for your app's quick-start tile on the start board (home
screen). Or click Browse to find it.

Copy the instrumentation key


The instrumentation key identifies the resource that you created. You need to give it to the SDK.

Install the SDK in your app


Install the Application Insights SDK in your app. This step depends heavily on the type of your application.
Use the instrumentation key to configure the SDK that you install in your application.
The SDK includes standard modules that send telemetry without you having to write any code. To track user
actions or diagnose issues in more detail, use the API to send your own telemetry.

See telemetry data


Close the quick start blade to return to your application blade in the Azure portal.
Click the Search tile to see Diagnostic Search, where the first events appear.
If you're expecting more data, click Refresh after a few seconds.

Creating a resource automatically


You can write a PowerShell script to create a resource automatically.

Next steps
Create a dashboard
Diagnostic Search
Explore metrics
Write Analytics queries
Navigation and Dashboards in the Application
Insights portal
12/8/2017 • 5 min to read

After you have set up Application Insights on your project, telemetry data about your app's performance and
usage will appear in your project's Application Insights resource in the Azure portal.

Find your telemetry


Sign in to the Azure portal and navigate to the Application Insights resource that you created for your app.

The overview blade (page) for your app shows a summary of the key diagnostic metrics of your app, and is a
gateway to the other features of the portal.
You can customize any of the charts and grids and pin them to a dashboard. That way, you can bring together
the key telemetry from different apps on a central dashboard.

Dashboards
The first thing you see after you sign in to the Microsoft Azure portal is a dashboard. Here you can bring
together the charts that are most important to you across all your Azure resources, including telemetry from
Azure Application Insights.
1. Navigate to specific resources such as your app in Application Insights: Use the left bar.
2. Return to the current dashboard, or switch to other recent views: Use the drop-down menu at top left.
3. Switch dashboards: Use the drop-down menu on the dashboard title.
4. Create, edit, and share dashboards in the dashboard toolbar.
5. Edit the dashboard: Hover over a tile and then use its top bar to move, customize, or remove it.

Add to a dashboard
When you're looking at a blade or set of charts that's particularly interesting, you can pin a copy of it to the
dashboard. You'll see it next time you return there.

1. Pin chart to dashboard. A copy of the chart appears on the dashboard.


2. Pin the whole blade to the dashboard - it appears on the dashboard as a tile that you can click through.
3. Click the top left corner to return to the current dashboard. Then you can use the drop-down menu to
return to the current view.
Notice that charts are grouped into tiles: a tile can contain more than one chart. You pin the whole tile to the
dashboard.
The chart is automatically refreshed with a frequency that depends on the chart's time range:
Time range up to 1 hour: Refresh every 5 minutes
Time range 1 - 24 hours: Refresh every 15 minutes
Time range above 24 hours: (Time range)/60.
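The schedule above can be expressed as a small function (time range in, refresh interval out, both in minutes); a sketch of the stated rules:

```python
def refresh_minutes(range_minutes):
    """Chart refresh interval (minutes) for a given time range (minutes)."""
    if range_minutes <= 60:          # up to 1 hour
        return 5
    if range_minutes <= 24 * 60:     # 1 - 24 hours
        return 15
    return range_minutes / 60        # above 24 hours: (time range)/60
```

For example, a chart showing the last 7 days refreshes every 168 minutes (about every 2.8 hours).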
Pin any query in Analytics
You can also pin Analytics charts to a shared dashboard. This allows you to add charts of any arbitrary query
alongside the standard metrics.
Results are automatically recalculated every hour. Click the Refresh icon on the chart to recalculate
immediately. (Browser refresh doesn't recalculate.)

Adjust a tile on the dashboard


Once a tile is on the dashboard, you can adjust it.

1. Add a chart to the tile.


2. Set the metric, group-by dimension and style (table, graph) of a chart.
3. Drag across the diagram to zoom in; click the undo button to reset the timespan; set filter properties for the
charts on the tile.
4. Set tile title.
Tiles pinned from metric explorer blades have more editing options than tiles pinned from an Overview blade.
The original tile that you pinned isn't affected by your edits.

Switch between dashboards


You can save more than one dashboard and switch between them. When you pin a chart or blade, they're
added to the current dashboard.
For example, you might have one dashboard for displaying full screen in the team room, and another for
general development.
On the dashboard, a blade appears as a tile: click it to go to the blade. A chart replicates the chart in its original
location.

Share dashboards
When you've created a dashboard, you can share it with other users.
Learn about Roles and access control.

Create dashboards programmatically


You can automate dashboard creation using Azure Resource Manager and a simple JSON editor.

App navigation
The overview blade is the gateway to more information about your app.
Any chart or tile - Click any tile or chart to see more detail about what it displays.
Overview blade buttons

Metrics Explorer - Create your own charts of performance and usage.


Search - Investigate specific instances of events such as requests, exceptions, or log traces.
Analytics - Powerful queries over your telemetry.
Time range - Adjust the range displayed by all the charts on the blade.
Delete - Delete the Application Insights resource for this app. You should also either remove the
Application Insights packages from your app code, or edit the instrumentation key in your app to direct
telemetry to a different Application Insights resource.
Essentials tab
Instrumentation key - Identifies this app resource.
Pricing - Make features available and set volume caps.
App navigation bar
Overview - Return to the app overview blade.
Activity log - Alerts and Azure administrative events.
Access control - Provide access to team members and others.
Tags - Use tags to group your app with others.
INVESTIGATE
Application map - Active map showing the components of your application, derived from the dependency
information.
Smart Detection - Review recent performance alerts.
Live Stream - A fixed set of near-instant metrics, useful when deploying a new build or debugging.
Availability / Web tests - Send regular requests to your web app from around the world.
Failures, Performance - Exceptions, failure rates and response times for requests to your app and for
requests from your app to dependencies.
Performance - Response time, dependency response times.
Servers - Performance counters. Available if you install Status Monitor.
Browser - Page view and AJAX performance. Available if you instrument your web pages.
Usage - Page view, user, and session counts. Available if you instrument your web pages.
CONFIGURE
Getting started - inline tutorial.
Properties - instrumentation key, subscription and resource id.
Alerts - metric alert configuration.
Continuous export - configure export of telemetry to Azure storage.
Performance testing - set up a synthetic load on your website.
Quota and pricing and ingestion sampling.
API Access - Create release annotations and for the Data Access API.
Work Items - Connect to a work tracking system so that you can create bugs while inspecting telemetry.
SETTINGS
Locks - lock Azure resources
Automation script - export a definition of the Azure resource so that you can use it as a template to create
new resources.

Next steps
Metrics explorer
Filter and segment metrics

Diagnostic search
Find and inspect events, related events, and create bugs

Analytics
Powerful query language
Using Search in Application Insights
11/1/2017 • 5 min to read

Search is a feature of Application Insights that you use to find and explore individual telemetry items,
such as page views, exceptions, or web requests. And you can view log traces and events that you
have coded.
(For more complex queries over your data, use Analytics.)

Where do you see Search?


In the Azure portal
You can open diagnostic search explicitly from the Application Insights Overview blade of your
application:

It also opens when you click through some charts and grid items. In this case, its filters are pre-set to
focus on the type of item you selected.
For example, on the Overview blade, there's a bar chart of requests classified by response time. Click
through a performance range to see a list of individual requests in that response time range:
The main body of Diagnostic Search is a list of telemetry items - server requests, page views, custom
events that you have coded, and so on. At the top of the list is a summary chart showing counts of
events over time.
Click Refresh to get new events.
In Visual Studio
In Visual Studio, there's also an Application Insights Search window. It's most useful for displaying
telemetry events generated by the application that you're debugging. But it can also show the events
collected from your published app at the Azure portal.
Open the Search window in Visual Studio:

The Search window has features similar to the web portal:


The Track Operation tab is available when you open a request or a page view. An 'operation' is a
sequence of events that is associated with a single request or page view. For example, dependency
calls, exceptions, trace logs, and custom events might be part of a single operation. The Track
Operation tab graphically shows the timing and duration of these events in relation to the request or
page view.

Inspect individual items


Select any telemetry item to see key fields and related items. If you want to see the full set of fields,
click "...".
Filter event types
Open the Filter blade and choose the event types you want to see. (If, later, you want to restore the
filters with which you opened the blade, click Reset.)

The event types are:


Trace - Diagnostic logs including TrackTrace, log4Net, NLog, and [Link] calls.
Request - HTTP requests received by your server application, including pages, scripts, images, style
files, and data. These events are used to create the request and response overview charts.
Page View - Telemetry sent by the web client, used to create page view reports.
Custom Event - If you inserted calls to TrackEvent() in order to monitor usage, you can search
them here.
Exception - Uncaught exceptions in the server, and those that you log by using TrackException().
Dependency - Calls from your server application to other services such as REST APIs or
databases, and AJAX calls from your client code.
Availability - Results of availability tests.

Filter on property values


You can filter events on the values of their properties. The available properties depend on the event
types you selected.
For example, pick out requests with a specific response code.

Choosing no values of a particular property has the same effect as choosing all values. It switches off
filtering on that property.
Narrow your search
Notice that the counts to the right of the filter values show how many occurrences there are in the
current filtered set.
In this example, it's clear that the 'Rpt/Employees' request results in most of the '500' errors:
Find events with the same property
Find all the items with the same property value:

Search the data


NOTE
To write more complex queries, open Analytics from the top of the Search blade.

You can search for terms in any of the property values. This is particularly useful if you have written
custom events with property values.
You might want to set a time range, as searches over a shorter range are faster.

Search for complete words, not substrings. Use quotation marks to enclose special characters.

STRING          IS NOT FOUND BY    BUT THESE DO FIND IT

[Link]          home               homecontroller
                controller         about
                out                "[Link]"

United States   Uni                united
                ted                states
                                   united AND states
                                   "united states"

Here are the search expressions you can use:

SAMPLE QUERY        EFFECT

apple               Find all events in the time range whose fields include the word "apple".

apple AND banana    Find events that contain both words. Use capital "AND", not "and".

apple OR banana     Find events that contain either word. Use "OR", not "or".

apple banana        Short form of apple OR banana.

apple NOT banana    Find events that contain one word but not the other.
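As a rough illustration of these operator semantics, here is a minimal sketch in Python (not the actual search implementation) of whole-word matching with AND, OR, and NOT:

```python
import re

def words(text):
    # Search matches complete words, not substrings.
    return {w.lower() for w in re.findall(r"\w+", text)}

def matches(event_text, query):
    """Evaluate a single-operator query: 'a AND b', 'a OR b', 'a NOT b',
    'a b' (short form of OR), or a single term 'a'."""
    w = words(event_text)
    for op in (" AND ", " OR ", " NOT "):
        if op in query:
            left, right = (t.strip().lower() for t in query.split(op, 1))
            if op == " AND ":
                return left in w and right in w
            if op == " NOT ":
                return left in w and right not in w
            return left in w or right in w
    # Multiple bare terms behave like OR.
    return any(t in w for t in query.lower().split())
```

For example, `matches("pineapple", "apple")` is false, because only complete words match.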

Sampling
If your app generates a lot of telemetry (and you are using the [Link] SDK version 2.0.0-beta3 or
later), the adaptive sampling module automatically reduces the volume that is sent to the portal by
sending only a representative fraction of events. However, events that are related to the same request
are selected or deselected as a group, so that you can navigate between related events.
Learn about sampling.
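The "related events are selected or deselected as a group" behavior can be sketched like this: derive the sampling decision from the operation (request) ID, so every item sharing that ID is kept or dropped together. This is an illustrative hash-based sketch, not the SDK's actual adaptive sampling algorithm:

```python
import hashlib

def keep_item(operation_id: str, sampling_percentage: float) -> bool:
    # Hash the operation ID to a stable score in [0, 100). Items sharing an
    # operation ID hash identically, so they are kept or dropped as a group.
    digest = hashlib.sha256(operation_id.encode()).digest()
    score = int.from_bytes(digest[:8], "big") / 2**64 * 100
    return score < sampling_percentage

telemetry = [
    {"type": "request",    "operation_id": "op-1"},
    {"type": "dependency", "operation_id": "op-1"},
    {"type": "trace",      "operation_id": "op-2"},
]
# At 50% sampling, both "op-1" items survive or vanish together.
kept = [t for t in telemetry if keep_item(t["operation_id"], 50.0)]
```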

Create work item


You can create a bug in GitHub or Visual Studio Team Services with the details from any telemetry
item.

The first time you do this, you are asked to configure a link to your Team Services account and project.
(You can also configure the link on the Work Items blade.)

Save your search


When you've set all the filters you want, you can save the search as a favorite. If you work in an
organizational account, you can choose whether to share it with other team members.
To see the search again, go to the overview blade and open Favorites:

If you saved with Relative time range, the re-opened blade has the latest data. If you saved with
Absolute time range, you see the same data every time. (If 'Relative' isn't available when you want to
save a favorite, click Time Range in the header, and set a time range that isn't a custom range.)

Send more telemetry to Application Insights


In addition to the out-of-the-box telemetry sent by the Application Insights SDK, you can:
Capture log traces from your favorite logging framework in .NET or Java. This means you can
search through your log traces and correlate them with page views, exceptions, and other events.
Write code to send custom events, page views, and exceptions.
Learn how to send logs and custom telemetry to Application Insights.

Q&A
How much data is retained?
See the Limits summary.
How can I see POST data in my server requests?
We don't log POST data automatically, but you can use TrackTrace or log calls. Put the POST data
in the message parameter. You can't filter on the message in the same way you can filter on
properties, but the size limit is larger.
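The idea can be sketched like this (illustrative Python; StubTelemetryClient and the 32 KB limit are assumptions for the sketch, not the real SDK class or its TrackTrace signature):

```python
# Hypothetical sketch: a stand-in telemetry client. The point is simply to
# put the POST body in the trace message, where the size limit is larger,
# and keep filterable values (like the URL) in properties.
MAX_MESSAGE_LEN = 32768  # assumed message size limit, for illustration

class StubTelemetryClient:
    def __init__(self):
        self.traces = []
    def track_trace(self, message, properties=None):
        self.traces.append((message[:MAX_MESSAGE_LEN], properties or {}))

def log_post(client, url, body):
    client.track_trace(f"POST {url}: {body}", properties={"url": url})

client = StubTelemetryClient()
log_post(client, "/api/orders", '{"item": "widget", "qty": 3}')
```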

Video

Next steps
Write complex queries in Analytics
Send logs and custom telemetry to Application Insights
Set up availability and responsiveness tests
Troubleshooting
Exploring Metrics in Application Insights
11/1/2017 • 7 min to read

Metrics in Application Insights are measured values and counts of events that are sent in telemetry from
your application. They help you detect performance issues and watch trends in how your application is
being used. There's a wide range of standard metrics, and you can also create your own custom metrics
and events.
Metrics and event counts are displayed in charts of aggregated values such as sums, averages, or counts.
Here's a sample set of charts:

You find metrics charts everywhere in the Application Insights portal. In most cases, they can be
customized, and you can add more charts to the blade. From the Overview blade, click through to more
detailed charts (which have titles such as "Servers"), or click Metrics Explorer to open a new blade where
you can create custom charts.

Time range
You can change the Time range covered by the charts or grids on any blade.

If you're expecting some data that hasn't appeared yet, click Refresh. Charts refresh themselves at
intervals, but the intervals are longer for larger time ranges. It can take a while for data to come through
the analysis pipeline onto a chart.
To zoom into part of a chart, drag over it:
Click the Undo Zoom button to restore it.

Granularity and point values


Hover your mouse over the chart to display the values of the metrics at that point.

The value of the metric at a particular point is aggregated over the preceding sampling interval.
The sampling interval or "granularity" is shown at the top of the blade.

You can adjust the granularity in the Time range blade:


The granularities available depend on the time range you select. The explicit granularities are alternatives
to the "automatic" granularity for the time range.

Editing charts and grids


To add a new chart to the blade:

Select Edit on an existing or new chart to edit what it shows:

You can display more than one metric on a chart, though there are restrictions about the combinations
that can be displayed together. As soon as you choose one metric, some of the others are disabled.
If you coded custom metrics into your app (calls to TrackMetric and TrackEvent) they will be listed here.

Segment your data


You can split a metric by property - for example, to compare page views on clients with different
operating systems.
Select a chart or grid, switch on grouping and pick a property to group by:
NOTE
When you use grouping, the Area and Bar chart types provide a stacked display. This is suitable where the
Aggregation method is Sum. But where the aggregation type is Average, choose the Line or Grid display types.

If you coded custom metrics into your app and they include property values, you'll be able to select the
property in the list.
Is the chart too small for segmented data? Adjust its height:

Aggregation types
By default, the legend at the side shows the aggregated value over the period of the chart. If you
hover over the chart, it shows the value at that point.
Each data point on the chart is an aggregate of the data values received in the preceding sampling interval
or "granularity". The granularity is shown at the top of the blade, and varies with the overall timescale of
the chart.
Metrics can be aggregated in different ways:
Count is a count of the events received in the sampling interval. It is used for events such as requests.
Variations in the height of the chart indicate variations in the rate at which the events occur. But note
that the numeric value changes when you change the sampling interval.
Sum adds up the values of all the data points received over the sampling interval, or the period of the
chart.
Average divides the Sum by the number of data points received over the interval.
Unique counts are used for counts of users and accounts. Over the sampling interval, or over the
period of the chart, the figure shows the count of different users seen in that time.
% - percentage versions of each aggregation are used only with segmented charts. The total always
adds up to 100%, and the chart shows the relative contribution of different components of a total.
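The aggregation methods above can be sketched as follows (illustrative Python; the event tuples and interval arithmetic are assumptions for the sketch, not the portal's implementation):

```python
from collections import defaultdict

def aggregate(events, interval_seconds, method):
    """events: list of (timestamp_seconds, value, user_id) tuples.
    Returns {interval_start: aggregated_value}, one data point per
    sampling interval ("granularity")."""
    buckets = defaultdict(list)
    for ts, value, user in events:
        buckets[ts - ts % interval_seconds].append((value, user))
    out = {}
    for start, items in buckets.items():
        values = [v for v, _ in items]
        if method == "count":
            out[start] = len(values)                 # e.g. request count
        elif method == "sum":
            out[start] = sum(values)
        elif method == "average":
            out[start] = sum(values) / len(values)   # Sum / Count
        elif method == "unique":
            out[start] = len({u for _, u in items})  # distinct users seen
    return out

events = [(3, 10, "alice"), (7, 30, "bob"), (65, 20, "alice")]
```

Note how Count changes with the interval: the same events aggregated over 60-second buckets give different per-point numbers than over 120-second buckets.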

Change the aggregation type


The default method for each metric is shown when you create a new chart or when all metrics are
deselected:

Pin Y-axis
By default, a chart shows Y-axis values from zero up to the maximum value in the data range, to give a
visual sense of the magnitude of the values. But in some cases it is more interesting to visually
inspect small changes in values. For customizations like this, use the Y-axis range editing feature to
pin the Y-axis minimum or maximum value where you want it. Click the "Advanced Settings" check
box to bring up the Y-axis range settings.

Filter your data


To see just the metrics for a selected set of property values:
If you don't select any values for a particular property, it's the same as selecting them all: there is no filter
on that property.
Notice the counts of events alongside each property value. When you select values of one property, the
counts alongside other property values are adjusted.
Filters apply to all the charts on a blade. If you want different filters applied to different charts, create and
save different metrics blades. If you want, you can pin charts from different blades to the dashboard, so
that you can see them alongside each other.
Remove bot and web test traffic
Use the filter Real or synthetic traffic and check Real.
You can also filter by Source of synthetic traffic.
To add properties to the filter list
Would you like to filter telemetry on a category of your own choosing? For example, maybe you divide
your users into different categories, and you would like to segment your data by these categories.
Create your own property. Set it in a Telemetry Initializer to have it appear in all telemetry - including the
standard telemetry sent by different SDK modules.
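The telemetry-initializer pattern can be sketched like this (illustrative Python; in the real .NET SDK you implement ITelemetryInitializer and register it in configuration):

```python
# Sketch of the initializer pattern: every registered initializer runs on
# every item before it is sent, so a property set here appears on all
# telemetry, including items from the standard SDK modules.
class Pipeline:
    def __init__(self):
        self.initializers = []
        self.sent = []
    def track(self, item):
        for init in self.initializers:
            init(item)
        self.sent.append(item)

def add_user_category(item):
    # Hypothetical custom property, usable later to segment charts.
    item.setdefault("properties", {})["userCategory"] = "trial"

pipeline = Pipeline()
pipeline.initializers.append(add_user_category)
pipeline.track({"type": "request", "name": "GET /home"})
pipeline.track({"type": "pageView", "name": "Home"})
```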

Edit the chart type


Notice that you can switch between grids and graphs:
Save your metrics blade
When you've created some charts, save them as a favorite. You can choose whether to share it with other
team members, if you use an organizational account.

To see the blade again, go to the overview blade and open Favorites:

If you chose Relative time range when you saved, the blade will be updated with the latest metrics. If you
chose Absolute time range, it will show the same data every time.

Reset the blade


If you edit a blade but then you'd like to get back to the original saved set, just click Reset.
Live metrics stream
For a much more immediate view of your telemetry, open Live Stream. Most metrics take a few minutes
to appear, because of the process of aggregation. By contrast, live metrics are optimized for low latency.

Set alerts
To be notified by email of unusual values of any metric, add an alert. You can choose either to send the
email to the account administrators, or to specific email addresses.

Learn more about alerts.

Continuous Export
If you want data continuously exported so that you can process it externally, consider using Continuous
export.
Power BI
If you want even richer views of your data, you can export to Power BI.
Analytics
Analytics is a more versatile way to analyze your telemetry using a powerful query language. Use it if you
want to combine or compute results from metrics, or perform an in-depth exploration of your app's
recent performance.
From a metric chart, you can click the Analytics icon to get directly to the equivalent Analytics query.

Troubleshooting
I don't see any data on my chart.
Filters apply to all the charts on the blade. Make sure that, while you're focusing on one chart, you
didn't set a filter that excludes all the data on another.
If you want to set different filters on different charts, create them in different blades, save them as
separate favorites. If you want, you can pin them to the dashboard so that you can see them
alongside each other.
If you group a chart by a property that is not defined on the metric, then there will be nothing on the
chart. Try clearing 'group by', or choose a different grouping property.
Performance data (CPU, IO rate, and so on) is available for Java web services, Windows desktop apps,
IIS web apps and services if you install status monitor, and Azure Cloud Services. It isn't available for
Azure websites.

Video

Next steps
Monitoring usage with Application Insights
Using Diagnostic Search
Live Metrics Stream: Monitor & Diagnose with 1-second latency
11/1/2017 • 5 min to read

Probe the beating heart of your live, in-production web application by using Live Metrics Stream from Application
Insights. Select and filter metrics and performance counters to watch in real time, without any disturbance to your
service. Inspect stack traces from sample failed requests and exceptions. Together with Profiler, Snapshot
debugger, and performance testing, Live Metrics Stream provides a powerful and non-invasive diagnostic tool for
your live web site.
With Live Metrics Stream, you can:
Validate a fix while it is released, by watching performance and failure counts.
Watch the effect of test loads, and diagnose issues live.
Focus on particular test sessions or filter out known issues, by selecting and filtering the metrics you want to
watch.
Get exception traces as they happen.
Experiment with filters to find the most relevant KPIs.
Monitor any Windows performance counter live.
Easily identify a server that is having issues, and filter all the KPI/live feed to just that server.

Live Metrics Stream is currently available on [Link] apps running on-premises or in the Cloud.
Get started
1. If you haven't yet installed Application Insights in your [Link] web app or Windows server app, do that now.
2. Update to the latest version of the Application Insights package. In Visual Studio, right-click your project
and choose Manage Nuget packages. Open the Updates tab, check Include prerelease, and select all
the [Link].* packages.
Redeploy your app.
3. In the Azure portal, open the Application Insights resource for your app, and then open Live Stream.
4. Secure the control channel if you might use sensitive data such as customer names in your filters.

No data? Check your server firewall


Check the outgoing ports for Live Metrics Stream are open in the firewall of your servers.

How does Live Metrics Stream differ from Metrics Explorer and
Analytics?
LIVE STREAM vs. METRICS EXPLORER AND ANALYTICS

Latency - Live Stream: data is displayed within one second. Metrics Explorer and Analytics:
data is aggregated over minutes.

Retention - Live Stream: data persists while it's on the chart, and is then discarded. Metrics
Explorer and Analytics: data is retained for 90 days.

On demand - Live Stream: data is streamed only while you have Live Metrics open. Metrics
Explorer and Analytics: data is sent whenever the SDK is installed and enabled.

Price - Live Stream: there is no charge for Live Stream data. Metrics Explorer and Analytics:
subject to pricing.

Sampling - Live Stream: all selected metrics and counters are transmitted; failures and stack
traces are sampled; TelemetryProcessors are not applied. Metrics Explorer and Analytics:
events may be sampled.

Control channel - Live Stream: filter control signals are sent to the SDK, and we recommend
you secure this channel. Metrics Explorer and Analytics: communication is one-way, to the
portal.

Select and filter your metrics


(Available on classic [Link] apps with the latest SDK.)
You can monitor custom KPI live by applying arbitrary filters on any Application Insights telemetry from the
portal. Click the filter control that shows when you mouse-over any of the charts. The following chart is plotting a
custom Request count KPI with filters on URL and Duration attributes. Validate your filters with the Stream
Preview section that shows a live feed of telemetry that matches the criteria you have specified at any point in
time.

You can monitor a value different from Count. The options depend on the type of stream, which could be any
Application Insights telemetry: requests, dependencies, exceptions, traces, events, or metrics. It can be your own
custom measurement:

In addition to Application Insights telemetry, you can also monitor any Windows performance counter by
selecting that from the stream options, and providing the name of the performance counter.
Live metrics are aggregated at two points: locally on each server, and then across all servers. You can change the
default at either by selecting other options in the respective drop-downs.
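That two-stage aggregation can be sketched as follows (illustrative Python, not the SDK's QuickPulse implementation): each server reduces its own one-second window first, then the service combines the per-server aggregates into a single chart point.

```python
def server_aggregate(values):
    # Stage 1: each server pre-aggregates its local one-second window,
    # so only a small summary crosses the network.
    return {"sum": sum(values), "count": len(values)}

def combine(server_aggregates):
    # Stage 2: the service combines the per-server summaries.
    total = sum(a["sum"] for a in server_aggregates)
    count = sum(a["count"] for a in server_aggregates)
    return {"sum": total, "count": count, "avg": total / count if count else 0.0}

# Two servers reporting request durations (ms) for the same second:
point = combine([server_aggregate([10, 20]), server_aggregate([30])])
```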

Sample Telemetry: Custom Live Diagnostic Events


By default, the live feed of events shows samples of failed requests and dependency calls, exceptions, events, and
traces. Click the filter icon to see the applied criteria at any point in time.
As with metrics, you can specify any arbitrary criteria to any of the Application Insights telemetry types. In this
example, we are selecting specific request failures, traces, and events. We are also selecting all exceptions and
dependency failures.

Note: Currently, for exception message-based criteria, use the outermost exception message. In the
preceding example, to filter out the benign exception whose inner exception message (which follows
the "<--" delimiter) is "The client disconnected.", use a criterion of message not-contains "Error
reading request content".
See the details of an item in the live feed by clicking it. You can pause the feed by clicking Pause, by
scrolling down, or by clicking an item. The live feed resumes when you scroll back to the top, or when
you click the counter of items collected while it was paused.
Filter by server instance
If you want to monitor a particular server role instance, you can filter by server.
SDK Requirements
Custom Live Metrics Stream is available with version 2.4.0-beta2 or newer of Application Insights SDK for web.
Remember to select "Include Prerelease" option from NuGet package manager.

Secure the control channel


The custom filters criteria you specify are sent back to the Live Metrics component in the Application Insights SDK.
The filters could potentially contain sensitive information such as customerIDs. You can make the channel secure
with a secret API key in addition to the instrumentation key.
Create an API Key
Add API key to Configuration
In the [Link] file, add the AuthenticationApiKey to the QuickPulseTelemetryModule:

<Add
Type="[Link],
[Link]">
<AuthenticationApiKey>YOUR-API-KEY-HERE</AuthenticationApiKey>
</Add>

Or in code, set it on the QuickPulseTelemetryModule:

[Link] = "YOUR-API-KEY-HERE";

However, if you recognize and trust all the connected servers, you can try the custom filters without the
authenticated channel. This option is available for six months. This override is required once every new session, or
when a new server comes online.
NOTE
We strongly recommend that you set up the authenticated channel before entering potentially sensitive information like
CustomerID in the filter criteria.

Generating a performance test load


If you want to watch the effect of a load increase, use the Performance Test blade. It simulates requests from a
number of simultaneous users. It can run either "manual tests" (ping tests) of a single URL, or it can run a
multi-step web performance test that you upload (in the same way as an availability test).

TIP
After you create the performance test, open the test and the Live Stream blade in separate windows. You can see when the
queued performance test starts, and watch live stream at the same time.

Troubleshooting
No data? If your application is in a protected network: Live Metrics Stream uses different IP addresses than other
Application Insights telemetry. Make sure those IP addresses are open in your firewall.

Next steps
Monitoring usage with Application Insights
Using Diagnostic Search
Profiler
Snapshot debugger
Application Map in Application Insights
11/1/2017 • 3 min to read

In Azure Application Insights, Application Map is a visual layout of the dependency relationships of your
application components. Each component shows KPIs such as load, performance, failures, and alerts, to help you
discover any component causing a performance issue or failure. You can click through from any component to
more detailed diagnostics, such as Application Insights events. If your app uses Azure services, you can also click
through to Azure diagnostics, such as SQL Database Advisor recommendations.
Like other charts, an application map can be pinned to the Azure dashboard, where it remains fully functional.

Open the application map


Open the map from the overview blade for your application:

The map shows:


Availability tests
Client-side component (monitored with the JavaScript SDK)
Server-side component
Dependencies of the client and server components
You can expand and collapse dependency link groups:

If you have many dependencies of one type (SQL, HTTP etc.), they may appear grouped.

Spot problems
Each node has relevant performance indicators, such as the load, performance, and failure rates for that
component.
Warning icons highlight possible problems. An orange warning means there are failures in requests, page views or
dependency calls. Red means a failure rate above 5%. If you want to adjust these thresholds, open Options.
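These thresholds amount to a simple rule, sketched here (illustrative; 5% is the documented default for red, and the function name is an assumption):

```python
def node_status(total_calls, failed_calls, red_threshold=0.05):
    # Orange: at least one failure; red: failure rate above the threshold
    # (5% by default, adjustable in Options on the map).
    if total_calls == 0 or failed_calls == 0:
        return "ok"
    rate = failed_calls / total_calls
    return "red" if rate > red_threshold else "orange"
```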
Active alerts also show up:

If you use SQL Azure, there's an icon that shows when there are recommendations on how you can improve
performance.

Click any icon to get more details:


Diagnostic click through
Each of the nodes on the map offers targeted click through for diagnostics. The options vary depending on the type
of the node.

For components that are hosted in Azure, the options include direct links to them.

Filters and time range


By default, the map summarizes all the data available for the chosen time range. But you can filter it to include only
specific operation names or dependencies.
Operation name: This includes both page views and server-side request types. With this option, the map shows
the KPI on the server/client-side node for the selected operations only. It shows the dependencies called in the
context of those specific operations.
Dependency base name: This includes the AJAX browser dependencies and server-side dependencies. If you
report custom dependency telemetry with the TrackDependency API, they also appear here. You can select the
dependencies to show on the map. Currently, this selection does not filter the server-side requests or the
client-side page views.

Save filters
To save the filters you have applied, pin the filtered view onto a dashboard.

Error pane
When you click a node in the map, an error pane is displayed on the right-hand side summarizing failures for that
node. Failures are grouped first by operation ID and then by problem ID.
Clicking on a failure takes you to the most recent instance of that failure.

Resource health
For some resource types, resource health is displayed at the top of the error pane. For example, clicking a SQL
node will show the database health and any alerts that have fired.

You can click the resource name to view standard overview metrics for that resource.

End-to-end system app maps


Requires SDK version 2.3 or higher
If your application has several components - for example, a back-end service in addition to the web app - then you
can show them all on one integrated app map.
The app map finds server nodes by following any HTTP dependency calls made between servers with the
Application Insights SDK installed. Each Application Insights resource is assumed to contain one server.
Multi-role app map (preview)
The preview multi-role app map feature allows you to use the app map with multiple servers sending data to the
same Application Insights resource / instrumentation key. Servers in the map are segmented by the
cloud_RoleName property on telemetry items. Set Multi-role Application Map to On from the Previews blade to
enable this configuration.
This approach may be desired in a micro-services application, or in other scenarios where you want to correlate
events across multiple servers within a single Application Insights resource.

Video

Feedback
Please provide feedback through the portal feedback option.

Next steps
Azure portal
Exploring HockeyApp data in Application Insights
11/15/2017 • 2 min to read

NOTE
Visual Studio App Center is now the recommended service from Microsoft for monitoring new mobile apps. Learn how to set
up your apps with App Center and Application Insights.

HockeyApp is a service for monitoring live desktop and mobile apps. From HockeyApp, you can send custom and
trace telemetry to monitor usage and assist in diagnosis (in addition to getting crash data). This stream of telemetry
can be queried using the powerful Analytics feature of Azure Application Insights. In addition, you can export the
custom and trace telemetry. To enable these features, you set up a bridge that relays HockeyApp custom data to
Application Insights.

The HockeyApp Bridge app


The HockeyApp Bridge App is the core feature that enables you to access your HockeyApp custom and trace
telemetry in Application Insights through the Analytics and Continuous Export features. Custom and trace events
collected by HockeyApp after the creation of the HockeyApp Bridge App will be accessible from these features. Let’s
see how to set up one of these Bridge Apps.
In HockeyApp, open Account Settings, API Tokens. Either create a new token or reuse an existing one. The minimum
rights required are "read only". Take a copy of the API token.

Open the Microsoft Azure portal and create an Application Insights resource. Set Application Type to “HockeyApp
bridge application”:
You don't need to set a name - this will automatically be set from the HockeyApp name.
The HockeyApp bridge fields appear.

Enter the HockeyApp token you noted earlier. This action populates the “HockeyApp Application” dropdown menu
with all your HockeyApp applications. Select the one you want to use, and complete the remainder of the fields.
Open the new resource.
Note that the data takes a while to start flowing.
That’s it! Custom and trace data collected in your HockeyApp-instrumented app from this point forward is now
also available to you in the Analytics and Continuous Export features of Application Insights.
Let’s briefly review each of these features now available to you.

Analytics
Analytics is a powerful tool for ad-hoc querying of your data, allowing you to diagnose and analyze your telemetry
and quickly discover root causes and patterns.

Learn more about Analytics

Continuous export
Continuous Export allows you to export your data into an Azure Blob Storage container. This is useful if you
need to keep your data for longer than the retention period offered by Application Insights. You can keep
the data in blob storage, process it into a SQL database, or move it into your preferred data warehousing solution.
Learn more about Continuous Export

Next steps
Apply Analytics to your data
Debug your applications with Azure Application
Insights in Visual Studio
11/1/2017 • 3 min to read

In Visual Studio (2015 and later), you can analyze performance and diagnose issues in your [Link] web app
both in debugging and in production, using telemetry from Azure Application Insights.
If you created your [Link] web app using Visual Studio 2017 or later, it already has the Application Insights SDK.
Otherwise, if you haven't done so already, add Application Insights to your app.
To monitor your app when it's in live production, you normally view the Application Insights telemetry in the
Azure portal, where you can set alerts and apply powerful monitoring tools. But for debugging, you can also
search and analyze the telemetry in Visual Studio. You can use Visual Studio to analyze telemetry both from your
production site and from debugging runs on your development machine. In the latter case, you can analyze
debugging runs even if you haven't yet configured the SDK to send telemetry to the Azure portal.

Debug your project


Run your web app in local debug mode by using F5. Open different pages to generate some telemetry.
In Visual Studio, you see a count of the events that have been logged by the Application Insights module in your
project.

Click this button to search your telemetry.

Application Insights search


The Application Insights Search window shows events that have been logged. (If you signed in to Azure when you
set up Application Insights, you can search the same events in the Azure portal.)
NOTE
After you select or deselect filters, click the Search button at the end of the text search field.

The free text search works on any fields in the events. For example, search for part of the URL of a page; or the
value of a property such as client city; or specific words in a trace log.
Click any event to see its detailed properties.
For requests to your web app, you can click through to the code.

You can also open related items to help diagnose failed requests or exceptions.
View exceptions and failed requests
Exception reports show in the Search window. (In some older types of [Link] application, you have to set up
exception monitoring to see exceptions that are handled by the framework.)
Click an exception to get a stack trace. If the code of the app is open in Visual Studio, you can click through from
the stack trace to the relevant line of the code.

View request and exception summaries in the code


In the Code Lens line above each handler method, you see a count of the requests and exceptions logged by
Application Insights in the past 24 hours.
NOTE
Code Lens shows Application Insights data only if you have configured your app to send telemetry to the Application
Insights portal.

More about Application Insights in Code Lens

Trends
Trends is a tool for visualizing how your app behaves over time.
Choose Explore Telemetry Trends from the Application Insights toolbar button or Application Insights Search
window. Choose one of five common queries to get started. You can analyze different datasets based on telemetry
types, time ranges, and other properties.
To find anomalies in your data, choose one of the anomaly options under the "View Type" dropdown. The filtering
options at the bottom of the window make it easy to home in on specific subsets of your telemetry.

More about Trends.

Local monitoring
(From Visual Studio 2015 Update 2) If you haven't configured the SDK to send telemetry to the Application
Insights portal (so that there is no instrumentation key in [Link]) then the diagnostics window
displays telemetry from your latest debugging session.
This is desirable if you have already published a previous version of your app. You don't want the telemetry from
your debugging sessions to be mixed up with the telemetry on the Application Insights portal from the published
app.
It's also useful if you have some custom telemetry that you want to debug before sending telemetry to the portal.
At first, I fully configured Application Insights to send telemetry to the portal. But now I'd like to see the
telemetry only in Visual Studio.
In the Search window's Settings, there's an option to search local diagnostics even if your app sends
telemetry to the portal.
To stop telemetry being sent to the portal, comment out the line <instrumentationkey>... from
[Link]. When you're ready to send telemetry to the portal again, uncomment it.

Next steps

Add more data


Monitor usage, availability, dependencies, exceptions.
Integrate traces from logging frameworks. Write custom
telemetry.

Working with the Application Insights portal


View dashboards, powerful diagnostic and analytic tools,
alerts, a live dependency map of your application, and
exported telemetry data.
Analyzing Trends in Visual Studio
1/3/2018 • 4 min to read

The Application Insights Trends tool visualizes how your web application's important telemetry events change over
time, helping you quickly identify problems and anomalies. By linking you to more detailed diagnostic information,
Trends can help you improve your app's performance, track down the causes of exceptions, and uncover insights
from your custom events.

Configure your web app for Application Insights


If you haven't done this already, configure your web app for Application Insights. This allows it to send telemetry to
the Application Insights portal. The Trends tool reads the telemetry from there.
Application Insights Trends is available in Visual Studio 2015 Update 3 and later.

Open Application Insights Trends


To open the Application Insights Trends window:
From the Application Insights toolbar button, choose Explore Telemetry Trends, or
From the project context menu, choose Application Insights > Explore Telemetry Trends, or
From the Visual Studio menu bar, choose View > Other Windows > Application Insights Trends.
You may see a prompt to select a resource. Click Select a resource, sign in with an Azure subscription, then
choose an Application Insights resource from the list for which you'd like to analyze telemetry trends.

Choose a trend analysis


Get started by choosing from one of five common trend analyses, each analyzing data from the last 24 hours:
Investigate performance issues with your server requests - Requests made to your service, grouped by
response times
Analyze errors in your server requests - Requests made to your service, grouped by HTTP response code
Examine the exceptions in your application - Exceptions from your service, grouped by exception type
Check the performance of your application's dependencies - Services called by your service, grouped by
response times
Inspect your custom events - Custom events you've set up for your service, grouped by event type.
These pre-built analyses are available later from the View common types of telemetry analysis button in the
upper-left corner of the Trends window.

Visualize trends in your application


Application Insights Trends creates a time series visualization from your app's telemetry. Each time series
visualization displays one type of telemetry, grouped by one property of that telemetry, over some time range. For
example, you might want to view server requests, grouped by the country from which they originated, over the last
24 hours. In this example, each bubble on the visualization would represent a count of the server requests for some
country/region during one hour.
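The grouping Trends performs can be sketched as a simple bucketing exercise. The event shape below (a timestamp plus a country property) is an assumption made for illustration, not the actual Application Insights schema:

```javascript
// Bucket raw telemetry events into one-hour bins per group value,
// yielding the per-bubble counts the visualization would display.
function bucketEvents(events, groupBy, bucketMs) {
  const buckets = new Map(); // key: "<group>|<bucketStart>" -> count
  for (const e of events) {
    const start = Math.floor(e.timestamp / bucketMs) * bucketMs;
    const key = `${e[groupBy]}|${start}`;
    buckets.set(key, (buckets.get(key) || 0) + 1);
  }
  return buckets;
}

const hour = 3600 * 1000;
const events = [
  { timestamp: 0.5 * hour, country: "DE" },
  { timestamp: 0.9 * hour, country: "DE" },
  { timestamp: 1.2 * hour, country: "US" },
];
const counts = bucketEvents(events, "country", hour);
// counts.get("DE|0") === 2; the lone US event lands in the second hour.
```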
Use the controls at the top of the window to adjust what types of telemetry you view. First, choose the telemetry
types in which you're interested:
Telemetry Type - Server requests, exceptions, dependencies, or custom events
Time Range - Anywhere from the last 30 minutes to the last 3 days
Group By - Exception type, problem ID, country/region, and more.
Then, click Analyze Telemetry to run the query.
To navigate between bubbles in the visualization:
Click to select a bubble, which updates the filters at the bottom of the window, summarizing just the events that
occurred during a specific time period
Double-click a bubble to navigate to the Search tool and see all of the individual telemetry events that occurred
during that time period
Ctrl-click a bubble to de-select it in the visualization.
TIP
The Trends and Search tools work together to help you pinpoint the causes of issues in your service among thousands of
telemetry events. For example, if one afternoon your customers notice your app is being less responsive, start with Trends.
Analyze requests made to your service over the past several hours, grouped by response time. See if there's an unusually
large cluster of slow requests. Then double-click that bubble to go to the Search tool, filtered to those request events. From
Search, you can explore the contents of those requests and navigate to the code involved to resolve the issue.

Filter
Discover more specific trends with the filter controls at the bottom of the window. To apply a filter, click on its
name. You can quickly switch between different filters to discover trends that may be hiding in a particular
dimension of your telemetry. If you apply a filter in one dimension, like Exception Type, filters in other dimensions
remain clickable even though they appear grayed-out. To un-apply a filter, click it again. Ctrl-click to select multiple
filters in the same dimension.

What if you want to apply multiple filters?


1. Apply the first filter.
2. Click the Apply selected filters and query again button by the name of the dimension of your first filter. This
will re-query your telemetry for only events that match the first filter.
3. Apply a second filter.
4. Repeat the process to find trends in specific subsets of your telemetry. For example, server requests named
"GET Home/Index" and that came from Germany and that received a 500 response code.
To un-apply one of these filters, click the Remove selected filters and query again button for the dimension.

Find anomalies
The Trends tool can highlight bubbles of events that are anomalous compared to other bubbles in the same time
series. In the View Type dropdown, choose Counts in time bucket (highlight anomalies) or Percentages in
time bucket (highlight anomalies). Red bubbles are anomalous. Anomalies are defined as bubbles with
counts/percentages exceeding 2.1 times the standard deviation of the counts/percentages that occurred in the past
two time periods (48 hours if you're viewing the last 24 hours, and so on).
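The anomaly rule described above can be sketched as follows. The exact baseline Trends uses isn't documented beyond this description, so treat the trailing-window mean and the window contents as assumptions:

```javascript
// Flag a bucket as anomalous when its count exceeds the mean of the
// trailing counts by more than 2.1 standard deviations, per the rule
// described above. trailingCounts is assumed to hold the counts from
// the two preceding time periods.
function isAnomalous(count, trailingCounts, k = 2.1) {
  const n = trailingCounts.length;
  const mean = trailingCounts.reduce((a, b) => a + b, 0) / n;
  const variance =
    trailingCounts.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
  return count > mean + k * Math.sqrt(variance);
}

const history = [10, 12, 9, 11, 10, 8]; // counts from the prior periods
// mean 10, std dev ~1.29, so the threshold is ~12.7:
// isAnomalous(13, history) → true; isAnomalous(12, history) → false
```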
TIP
Highlighting anomalies is especially helpful for finding outliers in time series of small bubbles that may otherwise look
similarly sized.

Next steps

Working with Application Insights in Visual Studio


Search telemetry, see data in CodeLens, and configure
Application Insights. All within Visual Studio.

Add more data


Monitor usage, availability, dependencies, exceptions.
Integrate traces from logging frameworks. Write custom
telemetry.
Working with the Application Insights portal
Dashboards, powerful diagnostic and analytic tools, alerts, a
live dependency map of your application, and telemetry
export.
Application Insights telemetry in Visual Studio
CodeLens
11/1/2017 • 3 min to read

Methods in the code of your web app can be annotated with telemetry about run-time exceptions and request
response times. If you install Azure Application Insights in your application, the telemetry appears in Visual Studio
CodeLens - the notes at the top of each function where you're used to seeing useful information such as the
number of places the function is referenced or the last person who edited it.

NOTE
Application Insights in CodeLens is available in Visual Studio 2015 Update 3 and later, or with the latest version of Developer
Analytics Tools extension. CodeLens is available in the Enterprise and Professional editions of Visual Studio.

Where to find Application Insights data


Look for Application Insights telemetry in the CodeLens indicators of the public request methods of your web
application. CodeLens indicators are shown above method and other declarations in C# and Visual Basic code. If
Application Insights data is available for a method, you'll see indicators for requests and exceptions such as "100
requests, 1% failed" or "10 exceptions." Click a CodeLens indicator for more details.

TIP
Application Insights request and exception indicators may take a few extra seconds to load after other CodeLens indicators
appear.

Exceptions in CodeLens

The exception CodeLens indicator shows how many exceptions, among the 15 most frequently occurring
exceptions in your application, occurred in the past 24 hours while processing the requests served by the
method.
To see more details, click the exceptions CodeLens indicator:
The percentage change in number of exceptions from the most recent 24 hours relative to the prior 24 hours
Choose Go to code to navigate to the source code for the function throwing the exception
Choose Search to query all instances of this exception that have occurred in the past 24 hours
Choose Trend to view a trend visualization for occurrences of this exception in the past 24 hours
Choose View all exceptions in this app to query all exceptions that have occurred in the past 24 hours
Choose Explore exception trends to view a trend visualization for all exceptions that have occurred in the past
24 hours.

TIP
If you see "0 exceptions" in CodeLens but you know there should be exceptions, check to make sure the right Application
Insights resource is selected in CodeLens. To select another resource, right-click on your project in the Solution Explorer and
choose Application Insights > Choose Telemetry Source. CodeLens is only shown for the 15 most frequently occurring
exceptions in your application in the past 24 hours, so if an exception is the 16th most frequently or less, you'll see "0
exceptions." Exceptions from ASP.NET views may not appear on the controller methods that generated those views.

TIP
If you see "? exceptions" in CodeLens, you need to associate your Azure account with Visual Studio or your Azure account
credential may have expired. In either case, click "? exceptions" and choose Add an account... to enter your credentials.

Requests in CodeLens

The request CodeLens indicator shows the number of HTTP requests that have been serviced by a method in the past 24
hours, plus the percentage of those requests that failed.
To see more details, click the requests CodeLens indicator:
The absolute and percentage changes in number of requests, failed requests, and average response times over
the past 24 hours compared to the prior 24 hours
The reliability of the method, calculated as the percentage of requests that did not fail in the past 24 hours
Choose Search for requests or failed requests to query all the (failed) requests that occurred in the past 24
hours
Choose Trend to view a trend visualization for requests, failed requests, or average response times in the past
24 hours.
Choose the name of the Application Insights resource in the upper left corner of the CodeLens details view to
change which resource is the source for CodeLens data.

Next steps
Working with Application Insights in Visual Studio
Search telemetry, see data in CodeLens, and configure
Application Insights. All within Visual Studio.

Add more data


Monitor usage, availability, dependencies, exceptions.
Integrate traces from logging frameworks. Write custom
telemetry.

Working with the Application Insights portal


Dashboards, powerful diagnostic and analytic tools, alerts, a
live dependency map of your application, and telemetry
export.
Usage analysis with Application Insights
11/15/2017 • 6 min to read

Which features of your web or mobile app are most popular? Do your users achieve their goals with your app?
Do they drop out at particular points, and do they return later? Azure Application Insights helps you gain powerful
insights into how people use your app. Every time you update your app, you can assess how well it works for
users. With this knowledge, you can make data driven decisions about your next development cycles.

Send telemetry from your app


The best experience is obtained by installing Application Insights both in your app server code, and in your web
pages. The client and server components of your app send telemetry back to the Azure portal for analysis.
1. Server code: Install the appropriate module for your ASP.NET, Azure, Java, Node.js, or other app.
Don't want to install server code? Just create an Azure Application Insights resource.
2. Web page code: Open the Azure portal, open the Application Insights resource for your app, and then
open Getting Started > Monitor and Diagnose Client-Side.

3. Mobile app code: Use the App Center SDK to collect events from your app, then send copies of these
events to Application Insights for analysis by following this guide.
4. Get telemetry: Run your project in debug mode for a few minutes, and then look for results in the
Overview blade in Application Insights.
Publish your app to monitor your app's performance and find out what your users are doing with your
app.

Include user and session ID in your telemetry


To track users over time, Application Insights requires a way to identify them. The Events tool is the only Usage
tool that does not require a user ID or a session ID.
Start sending user and session IDs using this process.

Explore usage demographics and statistics


Find out when people use your app, what pages they're most interested in, where your users are located, what
browsers and operating systems they use.
The Users and Sessions reports filter your data by pages or custom events, and segment them by properties such
as location, environment, and page. You can also add your own filters.

Insights on the right point out interesting patterns in the set of data.
The Users report counts the numbers of unique users that access your pages within your chosen time periods.
For web apps, users are counted by using cookies. If someone accesses your site with different browsers or
client machines, or clears their cookies, then they will be counted more than once.
The Sessions report counts the number of user sessions that access your site. A session is a period of activity
by a user, terminated by a period of inactivity of more than half an hour.
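The half-hour rule above translates directly into how sessions are counted. A simplified sketch (it ignores edge cases such as very long continuous activity):

```javascript
// Count a user's sessions: sort their event timestamps and start a new
// session whenever the gap since the previous event exceeds 30 minutes.
function countSessions(timestamps, timeoutMs = 30 * 60 * 1000) {
  if (timestamps.length === 0) return 0;
  const sorted = [...timestamps].sort((a, b) => a - b);
  let sessions = 1;
  for (let i = 1; i < sorted.length; i++) {
    if (sorted[i] - sorted[i - 1] > timeoutMs) sessions++;
  }
  return sessions;
}

const min = 60 * 1000;
// Events at 0, 10, and 55 minutes: the 45-minute gap starts a second
// session, so countSessions([0, 10 * min, 55 * min]) → 2
```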
More about the Users, Sessions, and Events tools

Page views
From the Usage blade, click through the Page Views tile to get a breakdown of your most popular pages:
The example above is from a games web site. From the charts, we can instantly see:
Usage hasn't improved in the past week. Maybe we should think about search engine optimization?
Tennis is the most popular game page. Let's focus on further improvements to this page.
On average, users visit the Tennis page about three times per week. (There are about three times more
sessions than users.)
Most users visit the site during the U.S. working week, and in working hours. Perhaps we should provide a
"quick hide" button on the web page.
The annotations on the chart show when new versions of the website were deployed. None of the recent
deployments had a noticeable effect on usage.
What if you want to investigate the traffic to your site in more detail, like splitting by a custom property your site
sends in its page view telemetry?
1. Open the Events tool in the Application Insights resource menu. This tool lets you analyze how many page
views and custom events were sent from your app, based on a variety of filtering, cohorting, and segmentation
options.
2. In the "Who used" dropdown, select "Any Page View".
3. In the "Split by" dropdown, select a property by which to split your page view telemetry.

Retention - how many users come back?


Retention helps you understand how often your users return to use your app, based on cohorts of users that
performed some business action during a certain time bucket.
Understand what specific features cause users to come back more than others
Form hypotheses based on real user data
Determine whether retention is a problem in your product

The retention controls on top allow you to define specific events and time range to calculate retention. The graph
in the middle gives a visual representation of the overall retention percentage by the time range specified. The
graph on the bottom represents individual retention in a given time period. This level of detail allows you to
understand what your users are doing and what might affect returning users on a more detailed granularity.
More about the Retention tool

Custom business events


To get a clear understanding of what users do with your app, it's useful to insert lines of code to log custom
events. These events can track anything from detailed user actions such as clicking specific buttons, to more
significant business events such as making a purchase or winning a game.
Although page views can represent useful events in some cases, that isn't true in general. A user can open a product
page without buying the product.
With specific business events, you can chart your users' progress through your site. You can find out their
preferences for different options, and where they drop out or have difficulties. With this knowledge, you can make
informed decisions about the priorities in your development backlog.
Events can be logged from the client side of the app:

appInsights.trackEvent("ExpandDetailTab", {DetailTab: tabName});

Or from the server side:


var tc = new TelemetryClient();
tc.TrackEvent("CreatedAccount", new Dictionary<string,string> { {"AccountType", account.Type} }, null);
...
tc.TrackEvent("AddedItemToCart", new Dictionary<string,string> { {"Item", item.Name} }, null);
...
tc.TrackEvent("CompletedPurchase");

You can attach property values to these events, so that you can filter or split the events when you inspect them in
the portal. In addition, a standard set of properties is attached to each event, such as anonymous user ID, which
allows you to trace the sequence of activities of an individual user.
Learn more about custom events and properties.
Slice and dice events
In the Users, Sessions, and Events tools, you can slice and dice custom events by user, event name, and properties.

Design the telemetry with the app


When you are designing each feature of your app, consider how you are going to measure its success with your
users. Decide what business events you need to record, and code the tracking calls for those events into your app
from the start.

A | B Testing
If you don't know which variant of a feature will be more successful, release both of them, making each accessible
to different users. Measure the success of each, and then move to a unified version.
For this technique, you attach distinct property values to all the telemetry that is sent by each version of your app.
You can do that by defining properties in the active TelemetryContext. These default properties are added to every
telemetry message that the application sends - not just your custom messages, but the standard telemetry as
well.
In the Application Insights portal, filter and split your data on the property values, so as to compare the different
versions.
To do this, set up a telemetry initializer:
// Telemetry initializer class
public class MyTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        telemetry.Context.Properties["AppVersion"] = "v2.1";
    }
}

In the web app initializer, such as Global.asax.cs:

protected void Application_Start()
{
    // ...
    TelemetryConfiguration.Active.TelemetryInitializers
        .Add(new MyTelemetryInitializer());
}

All new TelemetryClients automatically add the property value you specify. Individual telemetry events can
override the default values.

Next steps
Users, Sessions, Events
Funnels
Retention
User Flows
Workbooks
Add user context
Send user context IDs to enable usage experiences in
Azure Application Insights
11/7/2017 • 2 min to read

Tracking users
Application Insights enables you to monitor and track your users through a set of product usage tools:
Users, Sessions, Events
Funnels
Retention
Cohorts
Workbooks
In order to track what a user does over time, Application Insights needs an ID for each user or session. Include the
following IDs in every custom event or page view.
Users, Funnels, Retention, and Cohorts: Include user ID.
Sessions: Include session ID.
If your app is integrated with the JavaScript SDK, user ID is tracked automatically.

Choosing user IDs


User IDs should persist across user sessions to track how users behave over time. There are various approaches
for persisting the ID.
A definition of a user that you already have in your service.
If the service has access to a browser, it can pass the browser a cookie with an ID in it. The ID will persist for as
long as the cookie remains in the user's browser.
If necessary, you can use a new ID each session, but the results about users will be limited. For example, you
won't be able to see how a user's behavior changes over time.
The ID should be a Guid or another string complex enough to identify each user uniquely. For example, it could be
a long random number.
If the ID contains personally identifying information about the user, it is not an appropriate value to send to
Application Insights as a user ID. You can send such an ID as an authenticated user ID, but it does not fulfill the user
ID requirement for usage scenarios.
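A minimal sketch of persisting such an anonymous ID follows. The cookie name ("userId") and the ID format are arbitrary choices for this example, not Application Insights requirements:

```javascript
// Persist a random, non-personal user ID so the same person is
// recognized across sessions, per the guidance above.
function getOrCreateUserId(cookies) {
  if (!cookies.userId) {
    // A long random string: complex enough to identify users uniquely.
    cookies.userId =
      Math.random().toString(36).slice(2) + Date.now().toString(36);
  }
  return cookies.userId;
}

const jar = {}; // stand-in for the browser's cookie store
const first = getOrCreateUserId(jar);
const second = getOrCreateUserId(jar);
// Subsequent calls return the same persisted ID: first === second
```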

ASP.NET apps: Setting the user context in an ITelemetryInitializer


Create a telemetry initializer, as described in detail here, and set telemetry.Context.User.Id and telemetry.Context.Session.Id.
This example sets the user ID to an identifier that expires after the session. If possible, use a user ID that persists
across sessions.
using System;
using System.Web;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

namespace MvcWebRole.Telemetry
{
    /*
     * Custom TelemetryInitializer that sets the user ID.
     */
    public class MyTelemetryInitializer : ITelemetryInitializer
    {
        public void Initialize(ITelemetry telemetry)
        {
            // For a full experience, track each user across sessions. For an incomplete view of user
            // behavior within a session, store the user ID in the HttpContext Session.
            // Set the user ID if we haven't done so yet.
            if (HttpContext.Current.Session["UserId"] == null)
            {
                HttpContext.Current.Session["UserId"] = Guid.NewGuid().ToString();
            }

            // Set the user ID on the Application Insights telemetry item.
            telemetry.Context.User.Id = (string)HttpContext.Current.Session["UserId"];

            // Set the session ID on the Application Insights telemetry item.
            telemetry.Context.Session.Id = HttpContext.Current.Session.SessionID;
        }
    }
}

Next steps
To enable usage experiences, start sending custom events or page views.
If you already send custom events or page views, explore the Usage tools to learn how users use your service.
Usage overview
Users, Sessions, and Events
Funnels
Retention
Workbooks
Users, sessions, and events analysis in Application
Insights
11/27/2017 • 2 min to read

Find out when people use your web app, what pages they're most interested in, where your users are located,
what browsers and operating systems they use. Analyze business and usage telemetry by using Azure Application
Insights.

Get started
If you don't yet see data in the users, sessions, or events blades in the Application Insights portal, learn how to get
started with the usage tools.

The Users, Sessions, and Events segmentation tool


Three of the usage blades use the same tool to slice and dice telemetry from your web app from three
perspectives. By filtering and splitting the data, you can uncover insights about the relative usage of different
pages and features.
Users tool: How many people used your app and its features. Users are counted by using anonymous IDs
stored in browser cookies. A single person using different browsers or machines will be counted as more than
one user.
Sessions tool: How many sessions of user activity have included certain pages and features of your app. A
session is counted after half an hour of user inactivity, or after 24 hours of continuous use.
Events tool: How often certain pages and features of your app are used. A page view is counted when a
browser loads a page from your app, provided you have instrumented it.
A custom event represents one occurrence of something happening in your app, often a user interaction
like a button click or the completion of some task. You insert code in your app to generate custom events.
Querying for Certain Users
Explore different groups of users by adjusting the query options at the top of the Users tool:
Who used: Choose custom events and page views.
During: Choose a time range.
By: Choose how to bucket the data, either by a period of time or by another property such as browser or city.
Split By: Choose a property by which to split or segment the data.
Add Filters: Limit the query to certain users, sessions, or events based on their properties, such as browser or
city.

Saving and sharing reports


You can save Users reports, either private just to you in the My Reports section, or shared with everyone else with
access to this Application Insights resource in the Shared Reports section.
While saving a report or editing its properties, choose "Current Relative Time Range" to save a report with
continuously refreshed data, going back some fixed amount of time.
Choose "Current Absolute Time Range" to save a report with a fixed set of data. Keep in mind that data in
Application Insights is only stored for 90 days, so if more than 90 days have passed since a report with an
absolute time range was saved, the report will appear empty.

Example instances
The Example instances section shows information about a handful of individual users, sessions, or events that are
matched by the current query. Considering and exploring the behaviors of individuals, in addition to aggregates,
can provide insights about how people actually use your app.

Insights
The Insights sidebar shows large clusters of users that share common properties. These clusters can uncover
surprising trends in how people use your app: for example, that 40% of your app's usage comes from people
using a single feature.

Next steps
To enable usage experiences, start sending custom events or page views.
If you already send custom events or page views, explore the Usage tools to learn how users use your service.
Funnels
Retention
User Flows
Workbooks
Add user context
Discover how customers are using your application
with Application Insights Funnels
11/28/2017 • 1 min to read

Understanding the customer experience is of the utmost importance to your business. If your application involves
multiple stages, you need to know if most customers are progressing through the entire process, or if they are
ending the process at some point. The progression through a series of steps in a web application is known as a
funnel. You can use Azure Application Insights Funnels to gain insights into your users, and monitor step-by-step
conversion rates.

Create your funnel


Before you create your funnel, decide on the question you want to answer. For example, you might want to know
how many users are viewing the home page, viewing a customer profile, and creating a ticket. In this example, the
owners of the Fabrikam Fiber company want to know the percentage of customers who successfully create a
customer ticket.
Here are the steps they take to create their funnel.
1. In the Application Insights Funnels tool, select New.
2. From the Time Range drop-down menu, select Last 90 days. Select either My funnels or Shared funnels.
3. From the Step 1 drop-down list, select Index.
4. From the Step 2 list, select Customer.
5. From the Step 3 list, select Create.
6. Add a name to the funnel, and select Save.
The following screenshot shows an example of the kind of data the Funnels tool generates. The Fabrikam owners
can see that during the last 90 days, 54.3 percent of their customers who visited the home page created a
customer ticket. They can also see that 2,700 of their customers came to the index from the home page. This might
indicate a refresh issue.
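The step-by-step conversion rates a funnel reports can be sketched with a few lines of code. The input shape (an array of distinct user IDs per step) is an assumption for the example:

```javascript
// For each funnel step, compute the share of users from the previous
// step who also performed this step.
function funnelConversion(steps) {
  const rates = [];
  for (let i = 1; i < steps.length; i++) {
    const prev = new Set(steps[i - 1]);
    const reached = steps[i].filter((u) => prev.has(u)).length;
    rates.push(reached / prev.size);
  }
  return rates;
}

const usersByStep = [
  ["a", "b", "c", "d"], // Step 1: viewed Index
  ["a", "b", "c"],      // Step 2: viewed Customer
  ["a", "b"],           // Step 3: created a ticket
];
// funnelConversion(usersByStep) → [0.75, 0.666...]
```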
Funnels features
The preceding screenshot includes five highlighted areas. These are features of Funnels. The following list explains
more about each corresponding area in the screenshot:
1. If your app is sampled, you will see a sampling banner. Selecting the banner opens a context pane, explaining
how to turn sampling off.
2. You can export your funnel to Power BI.
3. Select a step to see more details on the right.
4. The historical conversion graph shows the conversion rates over the last 90 days.
5. Understand your users better by accessing the users tool. You can use filters in each step.

Next steps
Usage overview
Users, Sessions, and Events
Retention
Workbooks
Add user context
Export to Power BI
User retention analysis for web applications with
Application Insights
11/27/2017 • 2 min to read

The retention feature in Azure Application Insights helps you analyze how many users return to your app, and
how often they perform particular tasks or achieve goals. For example, if you run a game site, you could compare
the numbers of users who return to the site after losing a game with the number who return after winning. This
knowledge can help you improve both your user experience and your business strategy.

Get started
If you don't yet see data in the retention tool in the Application Insights portal, learn how to get started with the
usage tools.

The Retention tool

1. The toolbar allows users to create new retention reports, open existing retention reports, save current
retention report or save as, revert changes made to saved reports, refresh data on the report, share report via
email or direct link, and access the documentation page.
2. By default, retention shows all users who did anything, then came back and did anything else, over a period.
You can select different combinations of events to narrow the focus on specific user activities.
3. Add one or more filters on properties. For example, you can focus on users in a particular country or region.
Click Update after setting the filters.
4. The overall retention chart shows a summary of user retention across the selected time period.
5. The grid shows the number of users retained according to the query builder in #2. Each row represents a
cohort of users who performed any event in the time period shown. Each cell in the row shows how many of
that cohort returned at least once in a later period. Some users may return in more than one period.
6. The insights cards show top five initiating events, and top five returned events to give users a better
understanding of their retention report.

Users can hover over cells in the Retention tool to access the Analytics button and tooltips explaining what the
cell means. The Analytics button takes users to the Analytics tool with a pre-populated query that generates the users
from the cell.
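The arithmetic behind one row of the grid can be sketched as follows. The input shape (user ID mapped to the period indices in which that user was active) is an assumption for illustration:

```javascript
// Build one row of a retention grid: the cohort is every user active in
// cohortPeriod; each cell counts how many of them returned in a later period.
function retentionRow(activity, cohortPeriod, maxPeriod) {
  const cohort = Object.keys(activity).filter((u) =>
    activity[u].includes(cohortPeriod)
  );
  const returned = [];
  for (let p = cohortPeriod + 1; p <= maxPeriod; p++) {
    returned.push(cohort.filter((u) => activity[u].includes(p)).length);
  }
  return { cohortSize: cohort.length, returned };
}

const activity = {
  alice: [0, 1, 2],
  bob: [0, 2],
  carol: [1],
};
// The period-0 cohort is {alice, bob}: 1 returned in period 1, 2 in period 2.
```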

Use business events to track retention


To get the most useful retention analysis, measure events that represent significant business activities.
For example, many users might open a page in your app without playing the game that it displays. Tracking just
the page views would therefore provide an inaccurate estimate of how many people return to play the game after
enjoying it previously. To get a clear picture of returning players, your app should send a custom event when a
user actually plays.
It's good practice to code custom events that represent key business actions, and use these for your retention
analysis. To capture the game outcome, you need to write a line of code to send a custom event to Application
Insights. If you write it in the web page code or in Node.js, it looks like this:

appInsights.trackEvent("won game");

Or in ASP.NET server code:

telemetry.TrackEvent("won game");

Learn more about writing custom events.

Next steps
To enable usage experiences, start sending custom events or page views.
If you already send custom events or page views, explore the Usage tools to learn how users use your service.
Users, Sessions, Events
Funnels
User Flows
Workbooks
Add user context
Analyze user navigation patterns with User Flows in
Application Insights
8/15/2017 • 5 min to read

The User Flows tool visualizes how users navigate between the pages and features of your site. It's great for
answering questions like:
How do users navigate away from a page on your site?
What do users click on a page on your site?
Where are the places that users churn most from your site?
Are there places where users repeat the same action over and over?
The User Flows tool starts from an initial page view or event that you specify. Given this page view or custom
event, User Flows shows the page views and custom events that users sent immediately afterwards during a
session, two steps afterwards, and so forth. Lines of varying thickness show how many times each path was
followed by users. Special "Session Ended" nodes show how many users sent no page views or custom events
after the preceding node, highlighting where users probably left your site.
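The counting behind each column can be sketched from per-session event sequences. The session shape (an ordered array of page/event names) is an assumption for the example:

```javascript
// Tally what users did `step` events after the chosen initial event,
// adding a "Session Ended" node when a sequence stops short.
function flowStep(sessions, initialEvent, step) {
  const counts = {};
  for (const seq of sessions) {
    const start = seq.indexOf(initialEvent);
    if (start === -1) continue; // session never hit the initial event
    const next = seq[start + step] ?? "Session Ended";
    counts[next] = (counts[next] || 0) + 1;
  }
  return counts;
}

const sessions = [
  ["Home", "Tennis", "Checkout"],
  ["Home", "Tennis"],
  ["Home"],
];
// Step 1 after "Home": { Tennis: 2, "Session Ended": 1 }
```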

NOTE
Your Application Insights resource must contain page views or custom events to use the User Flows tool. Learn how to set
up your app to collect page views automatically with the Application Insights JavaScript SDK.

Start by choosing an initial page view or custom event


To get started answering questions with the User Flows tool, choose an initial page view or custom event to serve
as the starting point for the visualization:
1. Click the link in the "What do users do after...?" title, or click the Edit button.
2. Select a page view or custom event from the "Initial event" dropdown.
3. Click "Create graph".
The "Step 1" column of the visualization shows what users did most frequently just after the initial event, ordered
top-to-bottom from most- to least-frequent. The "Step 2" and subsequent columns show what users did
thereafter, creating a picture of all the ways users have navigated through your site.
By default, the User Flows tool randomly samples only the last 24 hours of page views and custom events from
your site. You can increase the time range and change the balance of performance and accuracy for random
sampling in the Edit menu.
If some of the page views and custom events aren't relevant to you, click the "X" on the nodes you want to hide.
Once you've selected the nodes you want to hide, click the "Create graph" button below the visualization. To see all
of the nodes you've hidden, click the Edit button, then look at the "Excluded events" section.
If page views or custom events are missing that you expect to see on the visualization:
Check the "Excluded events" section in the Edit menu.
Use the "Detail level" control in the Edit menu to include less-frequent events in the visualization.
If the page view or custom event you expect is sent infrequently by users, try increasing the time range of the
visualization in the Edit menu.
Make sure the page view or custom event you expect is set up to be collected by the Application Insights SDK in
the source code of your site. Learn more about collecting custom events.
If you want to see more steps in the visualization, use the "Number of steps" slider in the Edit menu.

After visiting a page or feature, where do users go and what do they click?
If your initial event is a page view, the first column ("Step 1") of the visualization is a quick way to understand what
users did immediately after visiting the page. Try opening your site in a window next to the User Flows
visualization. Compare your expectations of how users interact with the page to the list of events in the "Step 1"
column. Often, a UI element on the page that seems insignificant to your team can be among the most-used on
the page. It can be a great starting point for design improvements to your site.
If your initial event is a custom event, the first column shows what users did just after performing that action. As
with page views, consider if the observed behavior of your users matches your team's goals and expectations. If
your selected initial event is "Added Item to Shopping Cart", for example, look to see if "Go to Checkout" and
"Completed Purchase" appear in the visualization shortly thereafter. If user behavior is much different from your
expectations, use the visualization to understand how users are getting "trapped" by your site's current design.

Where are the places that users churn most from your site?
Watch for "Session Ended" nodes that appear high up in a column in the visualization, especially early in a flow.
This means many users probably churned from your site after following the preceding path of pages and UI
interactions. Sometimes churn is expected - after completing a purchase on an eCommerce site, for example - but
usually churn is a sign of design problems, poor performance, or other issues with your site that can be improved.
Keep in mind that "Session Ended" nodes are based only on telemetry collected by this Application Insights
resource. If Application Insights doesn't receive telemetry for certain user interactions, users could still have
interacted with your site in those ways after the User Flows tool says the session ended.

Are there places where users repeat the same action over and over?
Look for a page view or custom event that is repeated by many users across subsequent steps in the visualization.
This usually means that users are performing repetitive actions on your site. If you find repetition, think about
changing the design of your site or adding new functionality to reduce repetition. For example, you might add bulk
edit functionality if you find users performing repetitive actions on each row of a table element.

Common questions
Why do steps appear disconnected?
If steps (columns) in a User Flows visualization are disconnected, it means none of the paths taken by users
between the steps were frequent enough to be shown. To show less frequent events on the visualization so the
steps appear connected, adjust the "Detail level" slider in the Edit menu.
Does the initial event represent the first time the event appears in a session, or any time it appears in a session?
The initial event on the visualization only represents the first time a user sent that page view or custom event
during a session. If users can send the initial event multiple times in a session, then the "Step 1" column only
shows how users behave after the first instance of the initial event, not all instances.

Next steps
Usage overview
Users, Sessions, and Events
Retention
Adding custom events to your app
Investigate and share usage data with interactive
workbooks in Application Insights
11/8/2017 • 5 min to read • Edit Online

Workbooks combine Azure Application Insights data visualizations, Analytics queries, and text into interactive
documents. Workbooks are editable by other team members with access to the same Azure resource. This means
the queries and controls used to create a workbook are available to other people reading the workbook, making
them easy to explore, extend, and check for mistakes.
Workbooks are helpful for scenarios like:
Exploring the usage of your app when you don't know the metrics of interest in advance: numbers of users,
retention rates, conversion rates, etc. Unlike other usage analytics tools in Application Insights, workbooks let
you combine multiple kinds of visualizations and analyses, making them great for this kind of free-form
exploration.
Explaining to your team how a newly released feature is performing, by showing user counts for key
interactions and other metrics.
Sharing the results of an A/B experiment in your app with other members of your team. You can explain the
goals for the experiment with text, then show each usage metric and Analytics query used to evaluate the
experiment, along with clear call-outs for whether each metric was above- or below-target.
Reporting the impact of an outage on the usage of your app, combining data, text explanation, and a discussion
of next steps to prevent outages in the future.

NOTE
Your Application Insights resource must contain page views or custom events to use workbooks. Learn how to set up your
app to collect page views automatically with the Application Insights JavaScript SDK.

Editing, rearranging, cloning, and deleting workbook sections


A workbook is made of sections: independently editable usage visualizations, charts, tables, text, or Analytics
query results.
1. To edit the contents of a workbook section, click the Edit button below and to the right of the workbook section.
2. When you're done editing a section, click Done Editing in the bottom left corner of the section.
3. To create a duplicate of a section, click the Clone this section icon. Creating duplicate sections is a great
way to iterate on a query without losing previous iterations.
4. To move a section up or down in a workbook, click the Move up or Move down icon.
5. To remove a section permanently, click the Remove icon.

Adding usage data visualization sections


Workbooks offer four types of built-in usage analytics visualizations. Each answers a common question about the
usage of your app. To add tables and charts other than these four sections, add Analytics query sections (see
below).
To add a Users, Sessions, Events, or Retention section to your workbook, use the Add Users or other
corresponding button at the bottom of the workbook, or at the bottom of any section.

Users sections answer "How many users viewed some page or used some feature of my site?"
Sessions sections answer "How many sessions did users spend viewing some page or using some feature of my
site?"
Events sections answer "How many times did users view some page or use some feature of my site?"
Each of these three section types offers the same sets of controls and visualizations:
Learn more about editing Users, Sessions, and Events sections
Toggle the main chart, histogram grids, automatic insights, and sample users visualizations using the Show
Chart, Show Grid, Show Insights, and Sample of These Users checkboxes at the top of each section.
Retention sections answer "Of people who viewed some page or used some feature on one day or week, how
many came back in a subsequent day or week?"
Learn more about editing Retention sections
Toggle the optional Overall Retention chart using the Show overall retention chart checkbox at the top of
the section.

Adding Application Insights Analytics sections

To add an Application Insights Analytics query section to your workbook, use the Add Analytics query button at
the bottom of the workbook, or at the bottom of any section.
Analytics query sections let you add arbitrary queries over your Application Insights data into workbooks. This
flexibility means Analytics query sections should be your go-to for answering any questions about your site other
than the four listed above for Users, Sessions, Events, and Retention, like:
How many exceptions did your site throw during the same time period as a decline in usage?
What was the distribution of page load times for users viewing some page?
How many users viewed some set of pages on your site, but not some other set of pages? This can be useful to
understand if you have clusters of users who use different subsets of your site's functionality (use the join
operator with the kind=leftanti modifier in the Log Analytics query language).
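The join kind=leftanti pattern keeps rows from the left table that have no match in the right table. A minimal Python sketch of that set logic, with made-up user IDs, might look like:

```python
# Users who viewed the pricing page (the "left" table) vs. users who
# checked out (the "right" table). IDs here are made up for illustration.
viewed_pricing = {"u1", "u2", "u3", "u4"}
checked_out = {"u2", "u4"}

# join kind=leftanti on user_Id: keep left rows with no matching right row.
pricing_but_no_checkout = viewed_pricing - checked_out
print(sorted(pricing_but_no_checkout))  # ['u1', 'u3']
```

The real Analytics query would join the pageViews or customEvents table to itself on the user ID column, but the set difference above is the semantics that kind=leftanti implements.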

Use the Log Analytics query language reference to learn more about writing queries.

Adding text and Markdown sections


Adding headings, explanations, and commentary to your workbooks helps turn a set of tables and charts into a
narrative. Text sections in workbooks support the Markdown syntax for text formatting, like headings, bold, italics,
and bulleted lists.
To add a text section to your workbook, use the Add text button at the bottom of the workbook, or at the bottom
of any section.

Saving and sharing workbooks with your team


Workbooks are saved within an Application Insights resource, either in the My Reports section that's private to
you or in the Shared Reports section that's accessible to everyone with access to the Application Insights
resource. To view all the workbooks in the resource, click the Open button in the action bar.
To share a workbook that's currently in My Reports:
1. Click Open in the action bar
2. Click the "..." button beside the workbook you want to share
3. Click Move to Shared Reports.
To share a workbook with a link or via email, click Share in the action bar. Keep in mind that recipients of the link
need access to this resource in the Azure portal to view the workbook. To make edits, recipients need at least
Contributor permissions for the resource.
To pin a link to a workbook to an Azure Dashboard:
1. Click Open in the action bar
2. Click the "..." button beside the workbook you want to pin
3. Click Pin to dashboard.

Next steps
To enable usage experiences, start sending custom events or page views.
If you already send custom events or page views, explore the Usage tools to learn how users use your service.
Users, Sessions, Events
Funnels
Retention
User Flows
Add user context
Analytics in Application Insights
11/27/2017 • 1 min to read • Edit Online

Analytics is the powerful search and query tool of Application Insights. Analytics is a web tool, so no setup is
required. If you've already configured Application Insights for one of your apps, you can analyze your
app's data by opening Analytics from your app's overview blade.

You can also use the Analytics playground, a free demo environment with plenty of sample data.

Query data in Analytics


A typical query starts with a table name followed by a series of operators separated by | . For example, let's
find out how many requests our app received from different countries, during the last 3 hours:

requests
| where timestamp > ago(3h)
| summarize count() by client_CountryOrRegion
| render piechart

We start with the table name requests and add piped elements as needed. First we define a time filter to
review only records from the last 3 hours. We then count the number of records per country (that data is
found in the column client_CountryOrRegion). Finally, we render the results in a pie chart.
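As a cross-check on what each piped stage does, here is a toy Python sketch of the same filter-then-summarize pattern over an in-memory list of request records (a stand-in for the requests table, not the Analytics engine itself):

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

# Toy stand-in for the 'requests' table: each record has a timestamp and a country.
now = datetime.now(timezone.utc)
requests = [
    {"timestamp": now - timedelta(hours=1), "client_CountryOrRegion": "United States"},
    {"timestamp": now - timedelta(hours=2), "client_CountryOrRegion": "Germany"},
    {"timestamp": now - timedelta(hours=2, minutes=30), "client_CountryOrRegion": "United States"},
    {"timestamp": now - timedelta(hours=5), "client_CountryOrRegion": "Germany"},  # too old
]

# | where timestamp > ago(3h)
recent = [r for r in requests if r["timestamp"] > now - timedelta(hours=3)]

# | summarize count() by client_CountryOrRegion
counts = Counter(r["client_CountryOrRegion"] for r in recent)
print(dict(counts))  # {'United States': 2, 'Germany': 1}
```

The `| render piechart` stage has no analogue here; in Analytics it is the engine, not your code, that draws the chart.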
The language has many attractive features:
Filter your raw app telemetry by any fields, including your custom properties and metrics.
Join multiple tables – correlate requests with page views, dependency calls, exceptions and log traces.
Powerful statistical aggregations.
Immediate and powerful visualizations.
REST API that you can use to run queries programmatically, for example from PowerShell.
The full language reference details every command supported, and updates regularly.
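For the REST API route, a minimal sketch of building such a query request in Python follows; the endpoint shape is assumed from the public Application Insights query API, and the application ID and API key below are placeholders:

```python
from urllib.parse import quote, urlencode

def build_query_request(app_id: str, api_key: str, query: str):
    """Build the URL and headers for a GET against the Application Insights
    query API. The endpoint shape is an assumption based on the public query
    API; app_id and api_key are placeholders for your own values."""
    base = "https://api.applicationinsights.io/v1/apps"
    url = f"{base}/{quote(app_id)}/query?" + urlencode({"query": query})
    headers = {"x-api-key": api_key}
    return url, headers

url, headers = build_query_request(
    "00000000-0000-0000-0000-000000000000",  # placeholder application ID
    "<your API key>",                        # placeholder API key
    "requests | where timestamp > ago(3h) | summarize count() by client_CountryOrRegion",
)
```

You would then issue the GET with any HTTP client and read the JSON tables from the response body.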

Next steps
Get started with the Analytics portal
Get started writing queries
Review the SQL-users' cheat sheet for translations of the most common idioms.
Test drive Analytics on our playground if your app isn't sending data to Application Insights yet.
Watch the introductory video.
Import data into Analytics
11/1/2017 • 8 min to read • Edit Online

Import any tabular data into Analytics, either to join it with Application Insights telemetry from your app, or so that
you can analyze it as a separate stream. Analytics is a powerful query language well-suited to analyzing high-
volume timestamped streams of telemetry.
You can import data into Analytics using your own schema. It doesn't have to use the standard Application Insights
schemas such as request or trace.
You can import JSON or DSV (delimiter-separated values - comma, semicolon or tab) files.
There are three situations where importing to Analytics is useful:
Join with app telemetry. For example, you could import a table that maps URLs from your website to more
readable page titles. In Analytics, you can create a dashboard chart report that shows the ten most popular
pages in your website. Now it can show the page titles instead of the URLs.
Correlate your application telemetry with other sources such as network traffic, server data, or CDN log files.
Apply Analytics to a separate data stream. Application Insights Analytics is a powerful tool that works well
with sparse, timestamped streams - much better than SQL in many cases. If you have such a stream from some
other source, you can analyze it with Analytics.
Sending data to your data source is easy.
1. (One time) Define the schema of your data in a 'data source'.
2. (Periodically) Upload your data to Azure storage, and call the REST API to notify us that new data is waiting for
ingestion. Within a few minutes, the data is available for query in Analytics.
You define the upload frequency based on how quickly you want your data to be available for queries. It
is more efficient to upload data in larger chunks, but not larger than 1 GB.

NOTE
Got lots of data sources to analyze? Consider using logstash to ship your data into Application Insights.

Before you start


You need:
1. An Application Insights resource in Microsoft Azure.
If you want to analyze your data separately from any other telemetry, create a new Application Insights
resource.
If you're joining or comparing your data with telemetry from an app that is already set up with
Application Insights, then you can use the resource for that app.
Contributor or owner access to that resource.
2. Azure storage. You upload to Azure storage, and Analytics gets your data from there.
We recommend you create a dedicated storage account for your blobs. If your blobs are shared with
other processes, it takes longer for our processes to read your blobs.
Define your schema
Before you can import data, you must define a data source, which specifies the schema of your data. You can have
up to 50 data sources in a single Application Insights resource.
1. Start the data source wizard. Click the "Add new data source" button. Alternatively, click the settings button
in the upper-right corner and choose "Data Sources" from the dropdown menu.

Provide a name for your new data source.


2. Define the format of the files that you will upload.
You can either define the format manually, or upload a sample file.
If the data is in CSV format, the first row of the sample can be column headers. You can change the field
names in the next step.
The sample should include at least 10 rows or records of data.
Column or field names should be alphanumeric (without spaces or punctuation).
3. Review the schema that the wizard produced. If the wizard inferred the types from a sample file, you might need to adjust
the inferred types of the columns.

(Optional.) Upload a schema definition. See the format below.


Select a Timestamp. All data in Analytics must have a timestamp field. It must have type datetime ,
but it doesn't have to be named 'timestamp'. If your data has a column containing a date and time in
ISO format, choose this as the timestamp column. Otherwise, choose "as data arrived", and the import
process will add a timestamp field.
4. Create the data source.
Schema definition file format
Instead of editing the schema in the UI, you can load the schema definition from a file. The schema definition format is
as follows:
Delimited format

[
{"location": "0", "name": "RequestName", "type": "string"},
{"location": "1", "name": "timestamp", "type": "datetime"},
{"location": "2", "name": "IPAddress", "type": "string"}
]

JSON format

[
{"location": "$.name", "name": "name", "type": "string"},
{"location": "$.alias", "name": "alias", "type": "string"},
{"location": "$.room", "name": "room", "type": "long"}
]

Each column is identified by its location, name, and type.


Location – For the delimited file format, it is the position of the mapped value. For the JSON format, it is the
JSON path of the mapped key.
Name – the displayed name of the column.
Type – the data type of that column.
If a sample file was used and the file format is delimited, the schema definition must map all columns; it may also
add new columns at the end.
JSON allows partial mapping of the data, so the schema definition for the JSON format doesn't have to map
every key found in the sample data. It can also map columns that are not part of the sample data.
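As an illustration of the rules above, here is a hedged Python sketch that checks a delimited-format schema definition before you upload it; the set of allowed type names is an assumption based on the examples on this page:

```python
def validate_delimited_schema(columns):
    """Check a delimited-format schema definition: every column mapped,
    contiguous 0-based positions, and only known type names. The allowed
    type set is an assumption based on the examples on this page."""
    allowed_types = {"string", "datetime", "long", "int", "real", "bool"}
    positions = sorted(int(c["location"]) for c in columns)
    # Delimited format must map all columns, so positions run 0..N-1.
    assert positions == list(range(len(columns))), "positions must be contiguous from 0"
    for c in columns:
        assert c["type"] in allowed_types, f"unknown type: {c['type']}"
    return True

# The delimited-format example from above passes the check:
schema = [
    {"location": "0", "name": "RequestName", "type": "string"},
    {"location": "1", "name": "timestamp", "type": "datetime"},
    {"location": "2", "name": "IPAddress", "type": "string"},
]
```

A definition that skips position 0 (and so leaves a column unmapped) fails the contiguity check.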

Import data
To import data, you upload it to Azure storage, create an access key for it, and then make a REST API call.

You can perform the following process manually, or set up an automated system to do it at regular intervals. You
need to follow these steps for each block of data you want to import.
1. Upload the data to Azure blob storage.
Blobs can be any size up to 1GB uncompressed. Large blobs of hundreds of MB are ideal from a
performance perspective.
You can compress it with Gzip to improve upload time and latency for the data to be available for query.
Use the .gz filename extension.
It's best to use a separate storage account for this purpose, to avoid calls from different services slowing
performance.
If you send data at high frequency (every few seconds), we recommend using more than one
storage account, for performance reasons.
2. Create a Shared Access Signature key for the blob. The key should have an expiration period of one day and
provide read access.
3. Make a REST call to notify Application Insights that data is waiting.
Endpoint: [Link]
HTTP method: POST
Payload:

{
"data": {
"baseType":"OpenSchemaData",
"baseData":{
"ver":"2",
"blobSasUri":"<Blob URI with Shared Access Key>",
"sourceName":"<Schema ID>",
"sourceVersion":"1.0"
}
},
"ver":1,
"name":"[Link]",
"time":"<DateTime>",
"iKey":"<instrumentation key>"
}

The placeholders are:


Blob URI with Shared Access Key : You get this from the procedure for creating a key. It is specific to the blob.
Schema ID : The schema ID generated for your defined schema. The data in this blob should conform to the
schema.
DateTime : The time at which the request is submitted, UTC. We accept these formats: ISO8601 (like "2016-01-
01 [Link]"); RFC822 ("Wed, 14 Dec 16 [Link] +0000"); RFC850 ("Wednesday, 14-Dec-16 [Link] UTC");
RFC1123 ("Wed, 14 Dec 2016 [Link] +0000").
Instrumentation key of your Application Insights resource.

The data is available in Analytics after a few minutes.
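The notification body shown above can be assembled in any language. Here is a minimal Python sketch that only builds the JSON payload; the request name, instrumentation key, schema ID, and blob SAS URI are placeholders for values from your own setup:

```python
import json
from datetime import datetime, timezone

def build_ingestion_payload(ikey: str, schema_id: str, blob_sas_uri: str) -> str:
    """Build the OpenSchema ingestion notification body shown above.
    All angle-bracketed values are placeholders; the request name is
    elided on this page, so substitute the documented value."""
    payload = {
        "data": {
            "baseType": "OpenSchemaData",
            "baseData": {
                "ver": "2",
                "blobSasUri": blob_sas_uri,
                "sourceName": schema_id,
                "sourceVersion": "1.0",
            },
        },
        "ver": 1,
        "name": "<request name>",  # placeholder: value elided on this page
        "time": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),  # ISO8601 UTC
        "iKey": ikey,
    }
    return json.dumps(payload)

body = build_ingestion_payload("<instrumentation key>", "<schema id>", "<blob SAS URI>")
```

You would POST this body to the ingestion endpoint with Content-Type application/json.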

Error responses
400 Bad Request: indicates that the request payload is invalid. Check:
Correct instrumentation key.
Valid time value. It should be the current time in UTC.
JSON of the event conforms to the schema.
403 Forbidden: The blob you've sent is not accessible. Make sure that the shared access key is valid and has not
expired.
404 Not Found:
The blob doesn't exist.
The sourceId is wrong.
More detailed information is available in the response error message.

Sample code
This code uses the [Link] NuGet package.
Classes

namespace IngestionClient
{
using System;
using [Link];

public class AnalyticsDataSourceIngestionRequest


{
#region Members
private const string BaseDataRequiredVersion = "2";
private const string RequestName = "[Link]";
#endregion Members

public AnalyticsDataSourceIngestionRequest(string ikey, string schemaId, string blobSasUri, int version = 1)
{
Ver = version;
IKey = ikey;
Data = new Data
{
BaseData = new BaseData
{
Ver = BaseDataRequiredVersion,
BlobSasUri = blobSasUri,
SourceName = schemaId,
SourceVersion = [Link]()
}
};
}

[JsonProperty("data")]
public Data Data { get; set; }

[JsonProperty("ver")]
public int Ver { get; set; }

[JsonProperty("name")]
public string Name { get { return RequestName; } }

[JsonProperty("time")]
public DateTime Time { get { return [Link]; } }

[JsonProperty("iKey")]
public string IKey { get; set; }
}

#region Internal Classes

public class Data


{
private const string DataBaseType = "OpenSchemaData";
[JsonProperty("baseType")]
public string BaseType
{
get { return DataBaseType; }
}

[JsonProperty("baseData")]
public BaseData BaseData { get; set; }
}

public class BaseData


{
[JsonProperty("ver")]
public string Ver { get; set; }

[JsonProperty("blobSasUri")]
public string BlobSasUri { get; set; }

[JsonProperty("sourceName")]
public string SourceName { get; set; }

[JsonProperty("sourceVersion")]
public string SourceVersion { get; set; }
}

#endregion Internal Classes


}

namespace IngestionClient
{
using System;
using [Link];
using [Link];
using [Link];
using [Link];
using [Link];

public class AnalyticsDataSourceClient


{
#region Members
private readonly Uri endpoint = new Uri("[Link]");
private const string RequestContentType = "application/json; charset=UTF-8";
private const string RequestAccess = "application/json";
#endregion Members

#region Public

public async Task<bool> RequestBlobIngestion(AnalyticsDataSourceIngestionRequest ingestionRequest)


{
HttpWebRequest request = [Link](endpoint);
[Link] = [Link];
[Link] = RequestContentType;
[Link] = RequestAccess;

string notificationJson = Serialize(ingestionRequest);


byte[] notificationBytes = GetContentBytes(notificationJson);
[Link] = [Link];

Stream requestStream = [Link]();


[Link](notificationBytes, 0, [Link]);
[Link]();

try
{
using (var response = (HttpWebResponse)await [Link]())
{
return [Link] == [Link];
}
}
catch (WebException e)
{
HttpWebResponse httpResponse = [Link] as HttpWebResponse;
if (httpResponse != null)
{
[Link](
"Ingestion request failed with status code: {0}. Error: {1}",
[Link],
[Link]);
return false;
}
throw;
}
}
#endregion Public

#region Private
private byte[] GetContentBytes(string content)
{
return [Link](content);
}

private string Serialize(AnalyticsDataSourceIngestionRequest ingestionRequest)


{
return [Link](ingestionRequest);
}
#endregion Private
}
}

Ingest data
Use this code for each blob.

AnalyticsDataSourceClient client = new AnalyticsDataSourceClient();

var ingestionRequest = new AnalyticsDataSourceIngestionRequest("iKey", "sourceId", "blobUrlWithSas");

bool success = await [Link](ingestionRequest);

Next steps
Tour of the Log Analytics query language
If you're using Logstash, use the Logstash plugin to send data to Application Insights
Create Application Insights resources using
PowerShell
11/1/2017 • 8 min to read • Edit Online

This article shows you how to automate the creation and update of Application Insights resources
by using Azure Resource Manager. You might, for example, do so as part of a build process. Along with the
basic Application Insights resource, you can create availability web tests, set up alerts, set the pricing scheme, and
create other Azure resources.
The key to creating these resources is JSON templates for Azure Resource Manager. In a nutshell, the procedure
is: download the JSON definitions of existing resources; parameterize certain values such as names; and then run
the template whenever you want to create a new resource. You can package several resources together, to create
them all in one go - for example, an app monitor with availability tests, alerts, and storage for continuous export.
There are some subtleties to some of the parameterizations, which we'll explain here.

One-time setup
If you haven't used PowerShell with your Azure subscription before:
Install the Azure PowerShell module on the machine where you want to run the scripts:
1. Install Microsoft Web Platform Installer (v5 or higher).
2. Use it to install Microsoft Azure PowerShell.

Create an Azure Resource Manager template


Create a new .json file - let's call it [Link] in this example. Copy this content into it:

{
"$schema": "[Link]
"contentVersion": "[Link]",
"parameters": {
"appName": {
"type": "string",
"metadata": {
"description": "Enter the application name."
}
},
"appType": {
"type": "string",
"defaultValue": "web",
"allowedValues": [
"web",
"java",
"HockeyAppBridge",
"other"
],
"metadata": {
"description": "Enter the application type."
}
},
"appLocation": {
"type": "string",
"defaultValue": "East US",
"allowedValues": [
"South Central US",
"West Europe",
"East US",
"North Europe"
],
"metadata": {
"description": "Enter the application location."
}
},
"priceCode": {
"type": "int",
"defaultValue": 1,
"allowedValues": [
1,
2
],
"metadata": {
"description": "1 = Basic, 2 = Enterprise"
}
},
"dailyQuota": {
"type": "int",
"defaultValue": 100,
"minValue": 1,
"metadata": {
"description": "Enter daily quota in GB."
}
},
"dailyQuotaResetTime": {
"type": "int",
"defaultValue": 24,
"metadata": {
"description": "Enter daily quota reset hour in UTC (0 to 23). Values outside the range
will get a random reset hour."
}
},
"warningThreshold": {
"type": "int",
"defaultValue": 90,
"minValue": 1,
"maxValue": 100,
"metadata": {
"description": "Enter the % value of daily quota after which warning mail to be sent. "
}
}
},
"variables": {
"priceArray": [
"Basic",
"Application Insights Enterprise"
],
"pricePlan": "[take(variables('priceArray'),parameters('priceCode'))]",
"billingplan": "[concat(parameters('appName'),'/', variables('pricePlan')[0])]"
},
"resources": [
{
"type": "[Link]/components",
"kind": "[parameters('appType')]",
"name": "[parameters('appName')]",
"apiVersion": "2014-04-01",
"location": "[parameters('appLocation')]",
"tags": {},
"properties": {
"ApplicationId": "[parameters('appName')]"
},
"dependsOn": []
},
{
"name": "[variables('billingplan')]",
"type": "[Link]/components/CurrentBillingFeatures",
"location": "[parameters('appLocation')]",
"apiVersion": "2015-05-01",
"dependsOn": [
"[resourceId('[Link]/components', parameters('appName'))]"
],
"properties": {
"CurrentBillingFeatures": "[variables('pricePlan')]",
"DataVolumeCap": {
"Cap": "[parameters('dailyQuota')]",
"WarningThreshold": "[parameters('warningThreshold')]",
"ResetTime": "[parameters('dailyQuotaResetTime')]"
}
}
}
]
}

Create Application Insights resources


1. In PowerShell, sign in to Azure:
Login-AzureRmAccount

2. Run a command like this:

New-AzureRmResourceGroupDeployment -ResourceGroupName Fabrikam `


-TemplateFile .\[Link] `
-appName myNewApp

-ResourceGroupName is the group where you want to create the new resources.
-TemplateFile must occur before the custom parameters.
-appName is the name of the resource to create.

You can add other parameters - you'll find their descriptions in the parameters section of the template.

To get the instrumentation key


After creating an application resource, you'll want the instrumentation key:

$resource = Find-AzureRmResource -ResourceNameEquals "<YOUR APP NAME>" -ResourceType


"[Link]/components"
$details = Get-AzureRmResource -ResourceId $[Link]
$ikey = $[Link]

Set the price plan


You can set the price plan.
To create an app resource with the Enterprise price plan, using the template above:

New-AzureRmResourceGroupDeployment -ResourceGroupName Fabrikam `


-TemplateFile .\[Link] `
-priceCode 2 `
-appName myNewApp
PRICECODE PLAN

1 Basic

2 Enterprise

If you only want to use the default Basic price plan, you can omit the CurrentBillingFeatures resource from the
template.
If you want to change the price plan after the component resource has been created, you can use a template
that omits the "[Link]/components" resource. Also, omit the dependsOn node from the billing
resource.
To verify the updated price plan, look at the "Features+pricing" blade in the browser. Refresh the browser view
to make sure you see the latest state.

Add a metric alert


To set up a metric alert at the same time as your app resource, merge code like this into the template file:
{
parameters: { ... // existing parameters ...
"responseTime": {
"type": "int",
"defaultValue": 3,
"minValue": 1,
"metadata": {
"description": "Enter response time threshold in seconds."
}
},
variables: { ... // existing variables ...
// Alert names must be unique within resource group.
"responseAlertName": "[concat('ResponseTime-', toLower(parameters('appName')))]"
},
resources: { ... // existing resources ...
{
//
// Metric alert on response time
//
"name": "[variables('responseAlertName')]",
"type": "[Link]/alertrules",
"apiVersion": "2014-04-01",
"location": "[parameters('appLocation')]",
// Ensure this resource is created after the app resource:
"dependsOn": [
"[resourceId('[Link]/components', parameters('appName'))]"
],
"tags": {
"[concat('hidden-link:', resourceId('[Link]/components', parameters('appName')))]":
"Resource"
},
"properties": {
"name": "[variables('responseAlertName')]",
"description": "response time alert",
"isEnabled": true,
"condition": {
"$type": "[Link],
[Link]",
"[Link]": "[Link]",
"dataSource": {
"$type": "[Link],
[Link]",
"[Link]": "[Link]",
"resourceUri": "[resourceId('[Link]/components', parameters('appName'))]",
"metricName": "[Link]"
},
"threshold": "[parameters('responseTime')]", //seconds
"windowSize": "PT15M" // Take action if changed state for 15 minutes
},
"actions": [
{
"$type": "[Link],
[Link]",
"[Link]": "[Link]",
"sendToServiceOwners": true,
"customEmails": []
}
]
}
}
}

When you invoke the template, you can optionally add this parameter:
`-responseTime 2`

You can of course parameterize other fields.


To find out the type names and configuration details of other alert rules, create a rule manually and then inspect
it in Azure Resource Manager.

Add an availability test


This example is for a ping test (to test a single page).
There are two parts to an availability test: the test itself, and the alert that notifies you of failures.
Merge the following code into the template file that creates the app.

{
parameters: { ... // existing parameters here ...
"pingURL": { "type": "string" },
"pingText": { "type": "string" , defaultValue: ""}
},
variables: { ... // existing variables here ...
"pingTestName":"[concat('PingTest-', toLower(parameters('appName')))]",
"pingAlertRuleName": "[concat('PingAlert-', toLower(parameters('appName')), '-',
subscription().subscriptionId)]"
},
resources: { ... // existing resources here ...
{ //
// Availability test: part 1 configures the test
//
"name": "[variables('pingTestName')]",
"type": "[Link]/webtests",
"apiVersion": "2014-04-01",
"location": "[parameters('appLocation')]",
// Ensure this is created after the app resource:
"dependsOn": [
"[resourceId('[Link]/components', parameters('appName'))]"
],
"tags": {
"[concat('hidden-link:', resourceId('[Link]/components', parameters('appName')))]":
"Resource"
},
"properties": {
"Name": "[variables('pingTestName')]",
"Description": "Basic ping test",
"Enabled": true,
"Frequency": 900, // 15 minutes
"Timeout": 120, // 2 minutes
"Kind": "ping", // single URL test
"RetryEnabled": true,
"Locations": [
{
"Id": "us-va-ash-azr"
},
{
"Id": "emea-nl-ams-azr"
},
{
"Id": "apac-jp-kaw-edge"
}
],
"Configuration": {
"WebTest": "[concat('<WebTest Name=\"', variables('pingTestName'), '\" Enabled=\"True\"
CssProjectStructure=\"\" CssIteration=\"\" Timeout=\"120\" WorkItemIds=\"\"
xmlns=\"[Link] Description=\"\"
CredentialUserName=\"\" CredentialPassword=\"\" PreAuthenticate=\"True\" Proxy=\"default\"
StopOnError=\"False\" RecordedResultFile=\"\" ResultsLocale=\"\"> <Items> <Request Method=\"GET\"
Version=\"1.1\" Url=\"', parameters('Url'), '\" ThinkTime=\"0\" Timeout=\"300\"
ParseDependentRequests=\"True\" FollowRedirects=\"True\" RecordResult=\"True\" Cache=\"False\"
ResponseTimeGoal=\"0\" Encoding=\"utf-8\" ExpectedHttpStatusCode=\"200\" ExpectedResponseUrl=\"\"
ReportingName=\"\" IgnoreHttpStatusCode=\"False\" /> </Items> <ValidationRules> <ValidationRule
Classname=\"[Link],
[Link], Version=[Link], Culture=neutral,
PublicKeyToken=b03f5f7f11d50a3a\" DisplayName=\"Find Text\" Description=\"Verifies the existence of
the specified text in the response.\" Level=\"High\" ExectuionOrder=\"BeforeDependents\">
<RuleParameters> <RuleParameter Name=\"FindText\" Value=\"', parameters('pingText'), '\" />
<RuleParameter Name=\"IgnoreCase\" Value=\"False\" /> <RuleParameter Name=\"UseRegularExpression\"
Value=\"False\" /> <RuleParameter Name=\"PassIfTextFound\" Value=\"True\" /> </RuleParameters>
</ValidationRule> </ValidationRules> </WebTest>')]"
},
"SyntheticMonitorId": "[variables('pingTestName')]"
}
},

{
//
// Availability test: part 2, the alert rule
//
"name": "[variables('pingAlertRuleName')]",
"type": "[Link]/alertrules",
"apiVersion": "2014-04-01",
"location": "[parameters('appLocation')]",
"dependsOn": [
"[resourceId('[Link]/webtests', variables('pingTestName'))]"
],
"tags": {
"[concat('hidden-link:', resourceId('[Link]/components', parameters('appName')))]":
"Resource",
"[concat('hidden-link:', resourceId('[Link]/webtests', variables('pingTestName')))]":
"Resource"
},
"properties": {
"name": "[variables('pingAlertRuleName')]",
"description": "alert for web test",
"isEnabled": true,
"condition": {
"$type":
"[Link],
[Link]",
"[Link]": "[Link]",
"dataSource": {
"$type": "[Link],
[Link]",
"[Link]": "[Link]",
"resourceUri": "[resourceId('[Link]/webtests', variables('pingTestName'))]",
"metricName": "GSMT_AvRaW"
},
"windowSize": "PT15M", // Take action if changed state for 15 minutes
"failedLocationCount": 2
},
"actions": [
{
"$type": "[Link],
[Link]",
"[Link]": "[Link]",
"sendToServiceOwners": true,
"customEmails": []
}
]
}
}
}
To discover the codes for other test locations, or to automate the creation of more complex web tests, create an
example manually and then parameterize the code from Azure Resource Manager.

Add more resources


To automate the creation of any other resource of any kind, create an example manually, and then copy and
parameterize its code from Azure Resource Manager.
1. Open Azure Resource Manager. Navigate down through
subscriptions/resourceGroups/<your resource group>/providers/[Link]/components , to your
application resource.

Components are the basic Application Insights resources for displaying applications. There are separate
resources for the associated alert rules and availability web tests.
2. Copy the JSON of the component into the appropriate place in [Link] .
3. Delete these properties:
id
InstrumentationKey
CreationDate
TenantId
4. Open the webtests and alertrules sections and copy the JSON for individual items into your template.
(Don't copy from the webtests or alertrules nodes: go into the items under them.)
Each web test has an associated alert rule, so you have to copy both of them.
You can also include alerts on metrics; see the Metric names table later in this document for the available names.
5. Insert this line in each resource:
"apiVersion": "2015-05-01",
Parameterize the template
Now you have to replace the specific names with parameters. To parameterize a template, you write expressions
using a set of helper functions.
You can't parameterize just part of a string, so use concat() to build strings.
Here are examples of the substitutions you'll want to make. There are several occurrences of each substitution.
You might need others in your template. These examples use the parameters and variables we defined at the top
of the template.

FIND | REPLACE WITH

"hidden-link:/subscriptions/.../components/MyAppName" | "[concat('hidden-link:', resourceId('[Link]/components', parameters('appName')))]"

"/subscriptions/.../alertrules/myAlertName-myAppName-subsId", | "[resourceId('[Link]/alertrules', variables('alertRuleName'))]",

"/subscriptions/.../webtests/myTestName-myAppName", | "[resourceId('[Link]/webtests', parameters('webTestName'))]",

"myWebTest-myAppName" | "[variables('testName')]"

"myTestName-myAppName-subsId" | "[variables('alertRuleName')]"

"myAppName" | "[parameters('appName')]"

"myappname" (lower case) | "[toLower(parameters('appName'))]"

"<WebTest Name=\"myWebTest\" ... Url=\"[Link] ...>" | "[concat('<WebTest Name=\"', parameters('webTestName'), '\" ... Url=\"', parameters('Url'), '\"...>')]"
Also delete the Guid and Id properties.

Set dependencies between the resources


Azure should set up the resources in strict order. To make sure one setup completes before the next begins, add
dependency lines:
In the availability test resource:
"dependsOn": ["[resourceId('[Link]/components', parameters('appName'))]"],

In the alert resource for an availability test:


"dependsOn": ["[resourceId('[Link]/webtests', variables('testName'))]"],

Next steps
Other automation articles:
Create an Application Insights resource - quick method without using a template.
Set up Alerts
Create web tests
Send Azure Diagnostics to Application Insights
Deploy to Azure from GitHub
Create release annotations
PowerShell script to create an Application Insights
resource
11/1/2017 • 2 min to read

When you want to monitor a new application - or a new version of an application - with Azure Application Insights,
you set up a new resource in Microsoft Azure. This resource is where the telemetry data from your app is analyzed
and displayed.
You can automate the creation of a new resource by using PowerShell.
For example, if you are developing a mobile device app, it's likely that, at any time, there will be several published
versions of your app in use by your customers. You don't want to get the telemetry results from different versions
mixed up. So you get your build process to create a new resource for each build.

NOTE
If you want to create a set of resources all at the same time, consider creating the resources using an Azure template.

Script to create an Application Insights resource


See the relevant cmdlet specs:
New-AzureRmResource
New-AzureRmRoleAssignment
PowerShell Script
###########################################
# Set Values
###########################################

# If running manually, uncomment before the first
# execution to log in to the Azure portal:

# Add-AzureRmAccount / Login-AzureRmAccount

# Set the name of the Application Insights Resource

$appInsightsName = "TestApp"

# Set the application name used for the value of the Tag "AppInsightsApp"

$applicationTagName = "MyApp"

# Set the name of the Resource Group to use.
# Default is the application name.
$resourceGroupName = "MyAppResourceGroup"

###################################################
# Create the Resource and Output the name and iKey
###################################################

# Select the Azure subscription

Select-AzureRmSubscription -SubscriptionName "MySubscription"

# Create the App Insights resource.
# The location can also be North Europe, West Europe, South Central US, and so on.

$resource = New-AzureRmResource `
  -ResourceName $appInsightsName `
  -ResourceGroupName $resourceGroupName `
  -Tag @{ applicationType = "web"; applicationName = $applicationTagName } `
  -ResourceType "[Link]/components" `
  -Location "East US" `
  -PropertyObject @{ "Application_Type" = "web" } `
  -Force

# Give owner access to the team

New-AzureRmRoleAssignment `
-SignInName "myteam@[Link]" `
-RoleDefinitionName Owner `
-Scope $[Link]

# Display iKey
Write-Host "App Insights Name = " $[Link]
Write-Host "IKey = " $[Link]

What to do with the iKey


Each resource is identified by its instrumentation key (iKey). The iKey is an output of the resource creation script.
Your build script should provide the iKey to the Application Insights SDK embedded in your app.
There are two ways to make the iKey available to the SDK:
In [Link]:
<InstrumentationKey>ikey</InstrumentationKey>
Or in initialization code:
[Link].[Link] = "ikey";

See also
Create Application Insights and web test resources from templates
Set up monitoring of Azure diagnostics with PowerShell
Set alerts by using PowerShell
Use PowerShell to set alerts in Application Insights
11/1/2017 • 3 min to read

You can automate the configuration of alerts in Application Insights.


In addition, you can set webhooks to automate your response to an alert.

NOTE
If you want to create resources and alerts at the same time, consider using an Azure Resource Manager template.

One-time setup
If you haven't used PowerShell with your Azure subscription before:
Install the Azure PowerShell module on the machine where you want to run the scripts:
Install Microsoft Web Platform Installer (v5 or higher).
Use it to install Microsoft Azure PowerShell.

Connect to Azure
Start Azure PowerShell and connect to your subscription:

Add-AzureAccount

Get alerts
Get-AlertRule -ResourceGroup "Fabrikam" [-Name "My rule"] [-DetailedOutput]

Add alert
Add-AlertRule -Name "{ALERT NAME}" -Description "{TEXT}" `
-ResourceGroup "{GROUP NAME}" `
-ResourceId "/subscriptions/{SUBSCRIPTION ID}/resourcegroups/{GROUP
NAME}/providers/[Link]/components/{APP RESOURCE NAME}" `
-MetricName "{METRIC NAME}" `
-Operator GreaterThan `
-Threshold {NUMBER} `
-WindowSize {HH:MM:SS} `
[-SendEmailToServiceOwners] `
[-CustomEmails "EMAIL1@[Link]","EMAIL2@[Link]" ] `
-Location "East US" // must be East US at present
-RuleType Metric

Example 1
Email me if the server's response to HTTP requests, averaged over 5 minutes, is slower than 1 second. My
Application Insights resource is called IceCreamWebApp, and it is in resource group Fabrikam. I am the owner of
the Azure subscription.
The GUID is the subscription ID (not the instrumentation key of the application).

Add-AlertRule -Name "slow responses" `


-Description "email me if the server responds slowly" `
-ResourceGroup "Fabrikam" `
-ResourceId "/subscriptions/00000000-0000-0000-0000-
000000000000/resourcegroups/Fabrikam/providers/[Link]/components/IceCreamWebApp" `
-MetricName "[Link]" `
-Operator GreaterThan `
-Threshold 1 `
-WindowSize [Link] `
-SendEmailToServiceOwners `
-Location "East US" -RuleType Metric

Example 2
I have an application in which I use TrackMetric() to report a metric named "salesPerHour." Send an email to my
colleagues if "salesPerHour" drops below 100, averaged over 24 hours.

Add-AlertRule -Name "poor sales" `


-Description "slow sales alert" `
-ResourceGroup "Fabrikam" `
-ResourceId "/subscriptions/00000000-0000-0000-0000-
000000000000/resourcegroups/Fabrikam/providers/[Link]/components/IceCreamWebApp" `
-MetricName "salesPerHour" `
-Operator LessThan `
-Threshold 100 `
-WindowSize [Link] `
-CustomEmails "satish@[Link]","lei@[Link]" `
-Location "East US" -RuleType Metric

The same rule can be used for the metric reported by using the measurement parameter of another tracking call
such as TrackEvent or trackPageView.

Metric names
METRIC NAME | SCREEN NAME | DESCRIPTION

[Link] | Browser exceptions | Count of uncaught exceptions thrown in the browser.
[Link] | Server exceptions | Count of unhandled exceptions thrown by the app.
[Link] | Client processing time | Time between receiving the last byte of a document until the DOM is loaded. Async requests may still be processing.
[Link] | Page load network connect time | Time the browser takes to connect to the network. Can be 0 if cached.
[Link] | Response time | Time between the browser sending the request and starting to receive the response.
[Link] | Send request time | Time taken by the browser to send the request.
[Link] | Browser page load time | Time from user request until DOM, stylesheets, scripts, and images are loaded.
performanceCounter.available_bytes.value | Available memory | Physical memory immediately available for a process or for system use.
performanceCounter.io_data_bytes_per_sec.value | Process IO rate | Total bytes per second read and written to files, network, and devices.
performanceCounter.number_of_exceps_thrown_per_sec.value | Exception rate | Exceptions thrown per second.
performanceCounter.percentage_processor_time.value | Process CPU | The percentage of elapsed time that all process threads use the processor to execute instructions for the application's process.
performanceCounter.percentage_processor_total.value | Processor time | The percentage of time that the processor spends in non-idle threads.
performanceCounter.process_private_bytes.value | Process private bytes | Memory exclusively assigned to the monitored application's processes.
performanceCounter.request_execution_time.value | [Link] request execution time | Execution time of the most recent request.
performanceCounter.requests_in_application_queue.value | [Link] requests in execution queue | Length of the application request queue.
performanceCounter.requests_per_sec.value | [Link] request rate | Rate of all requests to the application per second from [Link].
[Link] | Dependency failures | Count of failed calls made by the server application to external resources.
[Link] | Server response time | Time between receiving an HTTP request and finishing sending the response.
[Link] | Request rate | Rate of all requests to the application per second.
[Link] | Failed requests | Count of HTTP requests that resulted in a response code >= 400.
[Link] | Page views | Count of client user requests for a web page. Synthetic traffic is filtered out.
{your custom metric name} | {Your metric name} | Your metric value reported by TrackMetric or in the measurements parameter of a tracking call.
The metrics are sent by different telemetry modules:

METRIC GROUP | COLLECTOR MODULE

basicExceptionBrowser, clientPerformance, view | Browser JavaScript
performanceCounter | Performance
remoteDependencyFailed | Dependency
request, requestFailed | Server request

Webhooks
You can automate your response to an alert. Azure will call a web address of your choice when an alert is raised.
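For illustration, a webhook receiver might reduce the posted payload to a one-line notification. This sketch assumes the classic alert payload shape (a top-level status field plus a context object carrying the rule and resource names); inspect a real payload from your alerts before relying on these exact fields:

```javascript
// Build a one-line notification from an alert webhook payload.
// The field names used here ("status", "context.name", "context.resourceName")
// are an assumption about the classic alert payload shape.
function describeAlert(payload) {
  const status = payload.status || "Unknown";
  const rule = (payload.context && payload.context.name) || "unnamed rule";
  const resource =
    (payload.context && payload.context.resourceName) || "unknown resource";
  return `Alert '${rule}' is ${status} on ${resource}`;
}

// Example payload, modeled on the alert rules defined earlier:
const sample = {
  status: "Activated",
  context: { name: "slow responses", resourceName: "IceCreamWebApp" }
};
console.log(describeAlert(sample));
// Alert 'slow responses' is Activated on IceCreamWebApp
```

A function like this could sit behind the web address you register for the alert and forward the message to chat, email, or a ticketing system.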

See also
Script to configure Application Insights
Create Application Insights and web test resources from templates
Automate coupling Microsoft Azure Diagnostics to Application Insights
Automate your response to an alert
Using PowerShell to set up Application Insights for
an Azure web app
11/1/2017 • 2 min to read

Microsoft Azure can be configured to send Azure Diagnostics to Azure Application Insights. The diagnostics relate
to Azure Cloud Services and Azure VMs. They complement the telemetry that you send from within the app using
the Application Insights SDK. As part of automating the process of creating new resources in Azure, you can
configure diagnostics using PowerShell.

Azure template
If the web app is in Azure and you create your resources using an Azure Resource Manager template, you can
configure Application Insights by adding this to the resources node:

{
resources: [
/* Create Application Insights resource */
{
"apiVersion": "2015-05-01",
"type": "[Link]/components",
"name": "nameOfAIAppResource",
"location": "centralus",
"kind": "web",
"properties": { "ApplicationId": "nameOfAIAppResource" },
"dependsOn": [
"[concat('[Link]/sites/', myWebAppName)]"
]
}
]
}

nameOfAIAppResource - a name for the Application Insights resource
myWebAppName - the id of the web app

Enable diagnostics extension as part of deploying a Cloud Service


The New-AzureDeployment cmdlet has a parameter ExtensionConfiguration , which takes an array of diagnostics
configurations. These can be created using the New-AzureServiceDiagnosticsExtensionConfig cmdlet. For example:
$service_package = "[Link]"
$service_config = "[Link]"
$diagnostics_storagename = "myservicediagnostics"
$webrole_diagconfigpath = "[Link]"
$workerrole_diagconfigpath = "[Link]"

$primary_storagekey = (Get-AzureStorageKey `
-StorageAccountName "$diagnostics_storagename").Primary
$storage_context = New-AzureStorageContext `
-StorageAccountName $diagnostics_storagename `
-StorageAccountKey $primary_storagekey

$webrole_diagconfig = `
New-AzureServiceDiagnosticsExtensionConfig `
-Role "WebRole" -Storage_context $storageContext `
-DiagnosticsConfigurationPath $webrole_diagconfigpath
$workerrole_diagconfig = `
New-AzureServiceDiagnosticsExtensionConfig `
-Role "WorkerRole" `
-StorageContext $storage_context `
-DiagnosticsConfigurationPath $workerrole_diagconfigpath

New-AzureDeployment `
-ServiceName $service_name `
-Slot Production `
-Package $service_package `
-Configuration $service_config `
-ExtensionConfiguration @($webrole_diagconfig,$workerrole_diagconfig)

Enable diagnostics extension on an existing Cloud Service


On an existing service, use Set-AzureServiceDiagnosticsExtension .

$service_name = "MyService"
$diagnostics_storagename = "myservicediagnostics"
$webrole_diagconfigpath = "[Link]"
$workerrole_diagconfigpath = "[Link]"
$primary_storagekey = (Get-AzureStorageKey `
-StorageAccountName "$diagnostics_storagename").Primary
$storage_context = New-AzureStorageContext `
-StorageAccountName $diagnostics_storagename `
-StorageAccountKey $primary_storagekey

Set-AzureServiceDiagnosticsExtension `
-StorageContext $storage_context `
-DiagnosticsConfigurationPath $webrole_diagconfigpath `
-ServiceName $service_name `
-Slot Production `
-Role "WebRole"
Set-AzureServiceDiagnosticsExtension `
-StorageContext $storage_context `
-DiagnosticsConfigurationPath $workerrole_diagconfigpath `
-ServiceName $service_name `
-Slot Production `
-Role "WorkerRole"

Get current diagnostics extension configuration


Get-AzureServiceDiagnosticsExtension -ServiceName "MyService"

Remove diagnostics extension


Remove-AzureServiceDiagnosticsExtension -ServiceName "MyService"

If you enabled the diagnostics extension using either Set-AzureServiceDiagnosticsExtension or
New-AzureServiceDiagnosticsExtensionConfig without the Role parameter, then you can remove the extension
using Remove-AzureServiceDiagnosticsExtension without the Role parameter. If the Role parameter was used when
enabling the extension, then it must also be used when removing the extension.
To remove the diagnostics extension from each individual role:

Remove-AzureServiceDiagnosticsExtension -ServiceName "MyService" -Role "WebRole"

See also
Monitor Azure Cloud Services apps with Application Insights
Send Azure Diagnostics to Application Insights
Automate configuring alerts
Automate Azure Application Insights processes with
the connector for Microsoft Flow
11/1/2017 • 3 min to read

Do you find yourself repeatedly running the same queries on your telemetry data to check that your service is
functioning properly? Are you looking to automate these queries for finding trends and anomalies and then build
your own workflows around them? The Azure Application Insights connector (preview) for Microsoft Flow is the
right tool for these purposes.
With this integration, you can now automate numerous processes without writing a single line of code. After you
create a flow by using an Application Insights action, the flow automatically runs your Application Insights Analytics
query.
You can add additional actions as well. Microsoft Flow makes hundreds of actions available. For example, you can
use Microsoft Flow to automatically send an email notification or create a bug in Visual Studio Team Services. You
can also use one of the many templates that are available for the connector for Microsoft Flow. These templates
speed up the process of creating a flow.

Create a flow for Application Insights


In this tutorial, you will learn how to create a flow that uses the Analytics auto-cluster algorithm to group attributes
in the data for a web application. The flow automatically sends the results by email, just one example of how you
can use Microsoft Flow and Application Insights Analytics together.
Step 1: Create a flow
1. Sign in to Microsoft Flow, and then select My Flows.
2. Click Create a flow from blank.
Step 2: Create a trigger for your flow
1. Select Schedule, and then select Schedule - Recurrence.
2. In the Frequency box, select Day, and in the Interval box, enter 1.

Step 3: Add an Application Insights action


1. Click New step, and then click Add an action.
2. Search for Azure Application Insights.
3. Click Azure Application Insights – Visualize Analytics query Preview.
Step 4: Connect to an Application Insights resource
To complete this step, you need an application ID and an API key for your resource. You can retrieve them from the
Azure portal, as shown in the following diagram:
Provide a name for your connection, along with the application ID and API key.

Step 5: Specify the Analytics query and chart type


This example query selects the failed requests within the last day and correlates them with exceptions that occurred
as part of the operation. Analytics correlates them based on the operation_Id identifier. The query then segments
the results by using the autocluster algorithm.
When you create your own queries, verify that they work properly in Analytics before you add them to your
flow.
Add the following Analytics query, and then select the HTML table chart type.

requests
| where timestamp > ago(1d)
| where success == "False"
| project name, operation_Id
| join ( exceptions
| project problemId, outerMessage, operation_Id
) on operation_Id
| evaluate autocluster()

Step 6: Configure the flow to send email


1. Click New step, and then click Add an action.
2. Search for Office 365 Outlook.
3. Click Office 365 Outlook – Send an email.
4. In the Send an email window, do the following:
a. Type the email address of the recipient.
b. Type a subject for the email.
c. Click anywhere in the Body box and then, on the dynamic content menu that opens at the right, select
Body.
d. Click Show advanced options.

5. On the dynamic content menu, do the following:


a. Select Attachment Name.
b. Select Attachment Content.
c. In the Is HTML box, select Yes.
Step 7: Save and test your flow
In the Flow name box, add a name for your flow, and then click Create flow.

You can wait for the trigger to run this action, or you can run the flow immediately by running the trigger on
demand.
When the flow runs, the recipients you have specified in the email list receive an email message that looks like the
following:

Next steps
Learn more about creating Analytics queries.
Learn more about Microsoft Flow.
Automate Application Insights processes by using
Logic Apps
11/1/2017 • 3 min to read

Do you find yourself repeatedly running the same queries on your telemetry data to check whether your service is
functioning properly? Are you looking to automate these queries for finding trends and anomalies and then build
your own workflows around them? The Azure Application Insights connector (preview) for Logic Apps is the right
tool for this purpose.
With this integration, you can automate numerous processes without writing a single line of code. You can create a
logic app with the Application Insights connector to quickly automate any Application Insights process.
You can add additional actions as well. The Logic Apps feature of Azure App Service makes hundreds of actions
available. For example, by using a logic app, you can automatically send an email notification or create a bug in
Visual Studio Team Services. You can also use one of the many available templates to help speed up the process of
creating your logic app.

Create a logic app for Application Insights


In this tutorial, you learn how to create a logic app that uses the Analytics autocluster algorithm to group attributes
in the data for a web application. The logic app automatically sends the results by email, just one example of how you
can use Application Insights Analytics and Logic Apps together.
Step 1: Create a logic app
1. Sign in to the Azure portal.
2. In the New pane, select Web + Mobile, and then select Logic App.

Step 2: Create a trigger for your logic app


1. In the Logic App Designer window, under Start with a common trigger, select Recurrence.
2. In the Frequency box, select Day and then, in the Interval box, type 1.

Step 3: Add an Application Insights action


1. Click New step, and then click Add an action.
2. In the Choose an action search box, type Azure Application Insights.
3. Under Actions, click Azure Application Insights – Visualize Analytics query Preview.
Step 4: Connect to an Application Insights resource
To complete this step, you need an application ID and an API key for your resource. You can retrieve them from the
Azure portal, as shown in the following diagram:
Provide a name for your connection, the application ID, and the API key.

Step 5: Specify the Analytics query and chart type


In the following example, the query selects the failed requests within the last day and correlates them with
exceptions that occurred as part of the operation. Analytics correlates the failed requests, based on the operation_Id
identifier. The query then segments the results by using the autocluster algorithm.
When you create your own queries, verify that they work properly in Analytics before you add them to your
logic app.
1. In the Query box, add the following Analytics query:

requests
| where timestamp > ago(1d)
| where success == "False"
| project name, operation_Id
| join ( exceptions
| project problemId, outerMessage, operation_Id
) on operation_Id
| evaluate autocluster()

2. In the Chart Type box, select Html Table.

Step 6: Configure the logic app to send email


1. Click New step, and then select Add an action.
2. In the search box, type Office 365 Outlook.
3. Click Office 365 Outlook – Send an email.
4. In the Send an email window, do the following:
a. Type the email address of the recipient.
b. Type a subject for the email.
c. Click anywhere in the Body box and then, on the dynamic content menu that opens at the right, select
Body.
d. Click Show advanced options.

5. On the dynamic content menu, do the following:


a. Select Attachment Name.
b. Select Attachment Content.
c. In the Is HTML box, select Yes.
Step 7: Save and test your logic app
Click Save to save your changes.
You can wait for the trigger to run the logic app, or you can run the logic app immediately by selecting Run.

When your logic app runs, the recipients you specified in the email list will receive an email that looks like the
following:

Next steps
Learn more about creating Analytics queries.
Learn more about Logic Apps.
Application Insights API for custom events and
metrics
1/11/2018 • 25 min to read

Insert a few lines of code in your application to find out what users are doing with it, or to help diagnose
issues. You can send telemetry from device and desktop apps, web clients, and web servers. Use the
Azure Application Insights core telemetry API to send custom events and metrics, and your own versions
of standard telemetry. This API is the same API that the standard Application Insights data collectors use.

API summary
The API is uniform across all platforms, apart from a few small variations.

METHOD | USED FOR

TrackPageView | Pages, screens, blades, or forms.
TrackEvent | User actions and other events. Used to track user behavior or to monitor performance.
TrackMetric | Performance measurements such as queue lengths not related to specific events.
TrackException | Logging exceptions for diagnosis. Trace where they occur in relation to other events and examine stack traces.
TrackRequest | Logging the frequency and duration of server requests for performance analysis.
TrackTrace | Diagnostic log messages. You can also capture third-party logs.
TrackDependency | Logging the duration and frequency of calls to external components that your app depends on.

You can attach properties and metrics to most of these telemetry calls.

Before you start


If you don't have a reference on Application Insights SDK yet:
Add the Application Insights SDK to your project:
[Link] project
Java project
[Link] project
JavaScript in each webpage
In your device or web server code, include:
C#: using [Link];
Visual Basic: Imports [Link]

Java: import [Link];

[Link]: var applicationInsights = require("applicationinsights");

Get a TelemetryClient instance


Get an instance of TelemetryClient (except in JavaScript in webpages):
C#

private TelemetryClient telemetry = new TelemetryClient();

Visual Basic

Private telemetry As New TelemetryClient

Java

private TelemetryClient telemetry = new TelemetryClient();

[Link]

var telemetry = [Link];

TelemetryClient is thread-safe.
For [Link] and Java projects, we recommend that you create an instance of TelemetryClient for each
module of your app. For instance, you may have one TelemetryClient instance in your web service to
report incoming HTTP requests, and another in a middleware class to report business logic events. You
can set properties such as [Link] to track users and sessions, or
[Link] to identify the machine. This information is attached to all events that
the instance sends.
In [Link] projects, you can use new [Link](instrumentationKey?) to create
a new instance, but this is recommended only for scenarios that require isolated configuration from the
singleton defaultClient .

TrackEvent
In Application Insights, a custom event is a data point that you can display in Metrics Explorer as an
aggregated count, and in Diagnostic Search as individual occurrences. (It isn't related to MVC or other
framework "events.")
Insert TrackEvent calls in your code to count various events. How often users choose a particular feature,
how often they achieve particular goals, or maybe how often they make particular types of mistakes.
For example, in a game app, send an event whenever a user wins the game:
JavaScript

[Link]("WinGame");
C#

[Link]("WinGame");

Visual Basic

[Link]("WinGame")

Java

[Link]("WinGame");

[Link]

[Link]({name: "WinGame"});

View your events in the Microsoft Azure portal


To see a count of your events, open a Metrics Explorer blade, add a new chart, and select Events.

To compare the counts of different events, set the chart type to Grid, and group by event name:

On the grid, click through an event name to see individual occurrences of that event. To see more detail,
click any occurrence in the list.
To focus on specific events in either Search or Metrics Explorer, set the blade's filter to the event names
that you're interested in:

Custom events in Analytics


The telemetry is available in the customEvents table in Application Insights Analytics. Each row represents
a call to trackEvent(..) in your app.
If sampling is in operation, the itemCount property shows a value greater than 1. For example,
itemCount==10 means that of 10 calls to trackEvent(), the sampling process transmitted only one of
them. To get a correct count of custom events, you should therefore use code such as
customEvents | summarize sum(itemCount) .
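If you post-process raw rows yourself (for example, from continuous export), the same correction applies: sum itemCount instead of counting rows. A minimal JavaScript sketch with made-up sample rows:

```javascript
// Each exported row carries itemCount: how many original calls the sampled
// row stands for. The true event count is the sum of itemCount, not the
// number of rows.
function trueEventCount(rows) {
  return rows.reduce((total, row) => total + (row.itemCount || 1), 0);
}

const rows = [
  { name: "WinGame", itemCount: 10 },
  { name: "WinGame", itemCount: 10 },
  { name: "WinGame" } // unsampled rows may omit itemCount; treat as 1
];
console.log(trueEventCount(rows)); // 21
```

Three rows, but 21 original trackEvent() calls, which matches what `summarize sum(itemCount)` reports in Analytics.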

TrackMetric
Application Insights can chart metrics that are not attached to particular events. For example, you could
monitor a queue length at regular intervals. With metrics, the individual measurements are of less
interest than the variations and trends, and so statistical charts are useful.
In order to send metrics to Application Insights, you can use the TrackMetric(..) API. There are two
ways to send a metric:
Single value. Every time you perform a measurement in your application, you send the
corresponding value to Application Insights. For example, assume that you have a metric
describing the number of items in a container. During a particular time period, you first put three
items into the container and then you remove two items. Accordingly, you would call TrackMetric
twice: first passing the value 3 and then the value -2 . Application Insights stores both values on
your behalf.
Aggregation. When working with metrics, every single measurement is rarely of interest. Instead a
summary of what happened during a particular time period is important. Such a summary is
called aggregation. In the above example, the aggregate metric sum for that time period is 1 and
the count of the metric values is 2 . When using the aggregation approach, you only invoke
TrackMetric once per time period and send the aggregate values. This is the recommended
approach since it can significantly reduce the cost and performance overhead by sending fewer
data points to Application Insights, while still collecting all relevant information.
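The arithmetic of the example above (the values 3 and -2 giving sum 1 and count 2) can be sketched as follows. This illustrates the aggregation idea only; it is not an SDK API:

```javascript
// Aggregate one period's raw measurements into the summary statistics
// that a single aggregated TrackMetric call would carry.
function aggregate(values) {
  return {
    count: values.length,
    sum: values.reduce(function (s, v) { return s + v; }, 0),
    min: Math.min.apply(null, values),
    max: Math.max.apply(null, values)
  };
}
```

Calling aggregate([3, -2]) yields sum 1 and count 2, matching the worked example.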
Examples:
Single values
To send a single metric value:
JavaScript

appInsights.trackMetric("queueLength", 42.0);

C#, Java

var sample = new MetricTelemetry();
sample.Name = "metric name";
sample.Value = 42.3;
telemetryClient.TrackMetric(sample);

Node.js

telemetry.trackMetric({name: "queueLength", value: 42.0});

Aggregating metrics
It is recommended to aggregate metrics before sending them from your app, to reduce bandwidth and
cost, and to improve performance. Here is an example of aggregating code:
C#

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

namespace MetricAggregationExample
{
    /// <summary>
    /// Aggregates metric values for a single time period.
    /// </summary>
    internal class MetricAggregator
    {
        private SpinLock _trackLock = new SpinLock();

        public DateTimeOffset StartTimestamp { get; }

        public int Count { get; private set; }
        public double Sum { get; private set; }
        public double SumOfSquares { get; private set; }
        public double Min { get; private set; }
        public double Max { get; private set; }
        public double Average { get { return (Count == 0) ? 0 : (Sum / Count); } }
        public double Variance { get { return (Count == 0) ? 0 : (SumOfSquares / Count)
                                                                 - (Average * Average); } }
        public double StandardDeviation { get { return Math.Sqrt(Variance); } }

        public MetricAggregator(DateTimeOffset startTimestamp)
        {
            this.StartTimestamp = startTimestamp;
        }

        public void TrackValue(double value)
        {
            bool lockAcquired = false;

            try
            {
                _trackLock.Enter(ref lockAcquired);

                if ((Count == 0) || (value < Min)) { Min = value; }
                if ((Count == 0) || (value > Max)) { Max = value; }
                Count++;
                Sum += value;
                SumOfSquares += value * value;
            }
            finally
            {
                if (lockAcquired)
                {
                    _trackLock.Exit();
                }
            }
        }
    }   // internal class MetricAggregator

    /// <summary>
    /// Accepts metric values and sends the aggregated values at 1-minute intervals.
    /// </summary>
    public sealed class Metric : IDisposable
    {
        private static readonly TimeSpan AggregationPeriod = TimeSpan.FromSeconds(60);

        private bool _isDisposed = false;
        private MetricAggregator _aggregator = null;
        private readonly TelemetryClient _telemetryClient;

        public string Name { get; }

        public Metric(string name, TelemetryClient telemetryClient)
        {
            this.Name = name ?? "null";
            this._aggregator = new MetricAggregator(DateTimeOffset.UtcNow);
            this._telemetryClient = telemetryClient ?? throw new ArgumentNullException(nameof(telemetryClient));

            Task.Run(this.AggregatorLoopAsync);
        }

        public void TrackValue(double value)
        {
            MetricAggregator currAggregator = _aggregator;
            if (currAggregator != null)
            {
                currAggregator.TrackValue(value);
            }
        }

        private async Task AggregatorLoopAsync()
        {
            while (_isDisposed == false)
            {
                try
                {
                    // Wait for the end of the aggregation period:
                    await Task.Delay(AggregationPeriod).ConfigureAwait(continueOnCapturedContext: false);

                    // Atomically snap the current aggregation:
                    MetricAggregator nextAggregator = new MetricAggregator(DateTimeOffset.UtcNow);
                    MetricAggregator prevAggregator = Interlocked.Exchange(ref _aggregator, nextAggregator);

                    // Only send anything if at least one value was measured:
                    if (prevAggregator != null && prevAggregator.Count > 0)
                    {
                        // Compute the actual aggregation period length:
                        TimeSpan aggPeriod = nextAggregator.StartTimestamp - prevAggregator.StartTimestamp;
                        if (aggPeriod.TotalMilliseconds < 1)
                        {
                            aggPeriod = TimeSpan.FromMilliseconds(1);
                        }

                        // Construct the metric telemetry item and send:
                        var aggregatedMetricTelemetry = new MetricTelemetry(
                                Name,
                                prevAggregator.Count,
                                prevAggregator.Sum,
                                prevAggregator.Min,
                                prevAggregator.Max,
                                prevAggregator.StandardDeviation);
                        aggregatedMetricTelemetry.Properties["AggregationPeriod"] = aggPeriod.ToString("c");

                        _telemetryClient.Track(aggregatedMetricTelemetry);
                    }
                }
                catch (Exception ex)
                {
                    // Log ex as appropriate for your application.
                }
            }
        }

        void IDisposable.Dispose()
        {
            _isDisposed = true;
            _aggregator = null;
        }
    }   // public sealed class Metric
}

Custom metrics in Metrics Explorer


To see the results, open Metrics Explorer and add a new chart. Edit the chart to show your metric.
NOTE
Your custom metric might take several minutes to appear in the list of available metrics.

Custom metrics in Analytics


The telemetry is available in the customMetrics table in Application Insights Analytics. Each row
represents a call to trackMetric(..) in your app.
valueSum - This is the sum of the measurements. To get the mean value, divide by valueCount .
valueCount - The number of measurements that were aggregated into this trackMetric(..) call.
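Because each row may already be an aggregate, the true mean across rows is sum(valueSum) divided by sum(valueCount), not the average of per-row means. In JavaScript terms (the row shape is assumed for illustration):

```javascript
// Compute the overall mean across pre-aggregated metric rows.
function overallMean(rows) {
  var sum = rows.reduce(function (s, r) { return s + r.valueSum; }, 0);
  var count = rows.reduce(function (c, r) { return c + r.valueCount; }, 0);
  return count === 0 ? 0 : sum / count;
}
```

For rows {valueSum: 10, valueCount: 2} and {valueSum: 2, valueCount: 2}, the overall mean is 12 / 4 = 3, whereas averaging the per-row means (5 and 1) would wrongly give 3 only by coincidence of weights being equal; with unequal counts the two disagree.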

Page views
In a device or webpage app, page view telemetry is sent by default when each screen or page is loaded.
But you can change that to track page views at additional or different times. For example, in an app that
displays tabs or blades, you might want to track a page whenever the user opens a new blade.
User and session data is sent as properties along with page views, so the user and session charts come
alive when there is page view telemetry.
Custom page views
JavaScript

appInsights.trackPageView("tab1");

C#

telemetry.TrackPageView("GameReviewPage");

Visual Basic

telemetry.TrackPageView("GameReviewPage")

If you have several tabs within different HTML pages, you can specify the URL too:

appInsights.trackPageView("tab1", "[Link]");

Timing page views


By default, the times reported as Page view load time are measured from when the browser sends the
request, until the browser's page load event is called.
Instead, you can either:
Set an explicit duration in the trackPageView call:
appInsights.trackPageView("tab1", null, null, null, durationInMilliseconds);
Use the page view timing calls startTrackPage and stopTrackPage .

JavaScript
// To start timing a page:
appInsights.startTrackPage("Page1");

...

// To stop timing and log the page:


appInsights.stopTrackPage("Page1", url, properties, measurements);

The name that you use as the first parameter associates the start and stop calls. It defaults to the current
page name.
The resulting page load durations displayed in Metrics Explorer are derived from the interval between the
start and stop calls. It's up to you what interval you actually time.
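If you time intervals yourself before passing them to the SDK, the pattern is simply to bracket the action with two timestamps. This sketch uses Date.now for portability; in a browser you could use performance.now for higher resolution:

```javascript
// Measure how long an action takes, in milliseconds, so the result can be
// reported as an explicit page-view or event duration.
function timeInterval(action) {
  var start = Date.now();
  action();
  return Date.now() - start;
}
```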
Page telemetry in Analytics
In Analytics two tables show data from browser operations:
The pageViews table contains data about the URL and page title
The browserTimings table contains data about client performance, such as the time taken to process
the incoming data
To find how long the browser takes to process different pages:

browserTimings | summarize avg(networkDuration), avg(processingDuration), avg(totalDuration) by name

To discover the popularities of different browsers:

pageViews | summarize count() by client_Browser

To associate page views to AJAX calls, join with dependencies:

pageViews | join (dependencies) on operation_Id

TrackRequest
The server SDK uses TrackRequest to log HTTP requests.
You can also call it yourself if you want to simulate requests in a context where you don't have the web
service module running.
However, the recommended way to send request telemetry is where the request acts as an operation
context.

Operation context
You can correlate telemetry items together by associating them with operation context. The standard
request-tracking module does this for exceptions and other events that are sent while an HTTP request is
being processed. In Search and Analytics, you can easily find any events associated with the request
using its operation Id.
See Telemetry correlation in Application Insights for more details on correlation.
When tracking telemetry manually, the easiest way to ensure telemetry correlation is to use this pattern:
C#

// Establish an operation context and associated telemetry item:


using (var operation = telemetryClient.StartOperation<RequestTelemetry>("operationName"))
{
// Telemetry sent in here will use the same operation ID.
...
telemetryClient.TrackTrace(...); // or other Track* calls
...
// Set properties of containing telemetry item--for example:
operation.Telemetry.ResponseCode = "200";

// Optional: explicitly send telemetry item:


telemetryClient.StopOperation(operation);

} // When operation is disposed, telemetry item is sent.

Along with setting an operation context, StartOperation creates a telemetry item of the type that you
specify. It sends the telemetry item when you dispose the operation, or if you explicitly call
StopOperation . If you use RequestTelemetry as the telemetry type, its duration is set to the timed interval
between start and stop.
Telemetry items reported within the scope of an operation become children of that operation. Operation
contexts can be nested.
In Search, the operation context is used to create the Related Items list:

See Track custom operations with Application Insights .NET SDK for more information on custom
operations tracking.
Requests in Analytics
In Application Insights Analytics, requests show up in the requests table.
If sampling is in operation, the itemCount property will show a value greater than 1. For example
itemCount==10 means that of 10 calls to trackRequest(), the sampling process only transmitted one of
them. To get a correct count of requests and average duration segmented by request names, use code
such as:

requests | summarize count = sum(itemCount), avgduration = avg(duration) by name


TrackException
Send exceptions to Application Insights:
To count them, as an indication of the frequency of a problem.
To examine individual occurrences.
The reports include the stack traces.
C#

try
{
...
}
catch (Exception ex)
{
telemetry.TrackException(ex);
}

JavaScript

try
{
...
}
catch (ex)
{
appInsights.trackException(ex);
}

Node.js

try
{
...
}
catch (ex)
{
telemetry.trackException({exception: ex});
}

The SDKs catch many exceptions automatically, so you don't always have to call TrackException explicitly.
ASP.NET: Write code to catch exceptions.
J2EE: Exceptions are caught automatically.
JavaScript: Exceptions are caught automatically. If you want to disable automatic collection, add a
line to the code snippet that you insert in your webpages:

({
instrumentationKey: "your key"
, disableExceptionTracking: true
})

Exceptions in Analytics
In Application Insights Analytics, exceptions show up in the exceptions table.
If sampling is in operation, the itemCount property shows a value greater than 1. For example
itemCount==10 means that of 10 calls to trackException(), the sampling process only transmitted one of
them. To get a correct count of exceptions segmented by type of exception, use code such as:

exceptions | summarize sum(itemCount) by type

Most of the important stack information is already extracted into separate variables, but you can pull
apart the details structure to get more. Since this structure is dynamic, you should cast the result to the
type you expect. For example:

exceptions
| extend method2 = tostring(details[0].parsedStack[1].method)

To associate exceptions with their related requests, use a join:

exceptions
| join (requests) on operation_Id

TrackTrace
Use TrackTrace to help diagnose problems by sending a "breadcrumb trail" to Application Insights. You
can send chunks of diagnostic data and inspect them in Diagnostic Search.
Log adapters use this API to send third-party logs to the portal.
C#

telemetry.TrackTrace(message, SeverityLevel.Warning, properties);

Node.js

telemetry.trackTrace({message: message, severity: appInsights.Contracts.SeverityLevel.Warning,
    properties: properties});

You can search on message content, but (unlike property values) you can't filter on it.
The size limit on message is much higher than the limit on properties. An advantage of TrackTrace is that
you can put relatively long data in the message. For example, you can encode POST data there.
In addition, you can add a severity level to your message. And, like other telemetry, you can add property
values to help you filter or search for different sets of traces. For example:

var telemetry = new TelemetryClient();

telemetry.TrackTrace("Slow database response",
    SeverityLevel.Warning,
    new Dictionary<string, string> { {"database", db.ID} });

In Search, you can then easily filter out all the messages of a particular severity level that relate to a
particular database.
Traces in Analytics
In Application Insights Analytics, calls to TrackTrace show up in the traces table.
If sampling is in operation, the itemCount property shows a value greater than 1. For example
itemCount==10 means that of 10 calls to trackTrace() , the sampling process only transmitted one of
them. To get a correct count of trace calls, you should therefore use code such as
traces | summarize sum(itemCount) .

TrackDependency
Use the TrackDependency call to track the response times and success rates of calls to an external piece
of code. The results appear in the dependency charts in the portal.

var success = false;
var startTime = DateTime.UtcNow;
var timer = System.Diagnostics.Stopwatch.StartNew();
try
{
    success = dependency.Call();
}
finally
{
    timer.Stop();
    telemetry.TrackDependency("myDependency", "myCall", startTime, timer.Elapsed, success);
}

var success = false;
var startTime = new Date().getTime();
try
{
    success = dependency.Call();
}
finally
{
    var elapsed = new Date() - startTime;
    telemetry.trackDependency({dependencyTypeName: "myDependency", name: "myCall",
        duration: elapsed, success: success});
}

Remember that the server SDKs include a dependency module that discovers and tracks certain
dependency calls automatically--for example, to databases and REST APIs. You have to install an agent on
your server to make the module work. You use this call if you want to track calls that the automated
tracking doesn't catch, or if you don't want to install the agent.
To turn off the standard dependency-tracking module, edit ApplicationInsights.config and delete the
reference to DependencyCollector.DependencyTrackingTelemetryModule .
Dependencies in Analytics
In Application Insights Analytics, trackDependency calls show up in the dependencies table.
If sampling is in operation, the itemCount property shows a value greater than 1. For example
itemCount==10 means that of 10 calls to trackDependency(), the sampling process only transmitted one
of them. To get a correct count of dependencies segmented by target component, use code such as:

dependencies | summarize sum(itemCount) by target

To associate dependencies with their related requests, use a join:

dependencies
| join (requests) on operation_Id
Flushing data
Normally, the SDK sends data at times chosen to minimize the impact on the user. However, in some
cases, you might want to flush the buffer--for example, if you are using the SDK in an application that
shuts down.
C#

telemetry.Flush();

// Allow some time for flushing before shutdown.


System.Threading.Thread.Sleep(1000);

Node.js

telemetry.flush();

Note that the function is asynchronous for the server telemetry channel.

Authenticated users
In a web app, users are (by default) identified by cookies. A user might be counted more than once if they
access your app from a different machine or browser, or if they delete cookies.
If users sign in to your app, you can get a more accurate count by setting the authenticated user ID in the
browser code:
JavaScript

// Called when my app has identified the user.


function Authenticated(signInId) {
var validatedId = signInId.replace(/[,;=| ]+/g, "_");
appInsights.setAuthenticatedUserContext(validatedId);
...
}

In an [Link] web MVC application, for example:


Razor

@if (Request.IsAuthenticated)
{
<script>
appInsights.setAuthenticatedUserContext("@User.Identity.Name
    .Replace("\\", "\\\\")"
    .replace(/[,;=| ]+/g, "_"));
</script>
}

It isn't necessary to use the user's actual sign-in name. It only has to be an ID that is unique to that user. It
must not include spaces or any of the characters ,;=| .
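A small helper can enforce those character restrictions before the ID is passed to the SDK; the function name is illustrative:

```javascript
// Replace runs of the disallowed characters , ; = | and space with "_",
// matching the pattern used in the snippets above.
function toValidUserId(signInId) {
  return signInId.replace(/[,;=| ]+/g, "_");
}
```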
The user ID is also set in a session cookie and sent to the server. If the server SDK is installed, the
authenticated user ID is sent as part of the context properties of both client and server telemetry. You can
then filter and search on it.
If your app groups users into accounts, you can also pass an identifier for the account (with the same
character restrictions).

appInsights.setAuthenticatedUserContext(validatedId, accountId);

In Metrics Explorer, you can create a chart that counts Users, Authenticated, and User accounts.
You can also search for client data points with specific user names and accounts.

Filtering, searching, and segmenting your data by using


properties
You can attach properties and measurements to your events (and also to metrics, page views, exceptions,
and other telemetry data).
Properties are string values that you can use to filter your telemetry in the usage reports. For example, if
your app provides several games, you can attach the name of the game to each event so that you can see
which games are more popular.
There's a limit of 8192 on the string length. (If you want to send large chunks of data, use the message
parameter of TrackTrace.)
Metrics are numeric values that can be presented graphically. For example, you might want to see if
there's a gradual increase in the scores that your gamers achieve. The graphs can be segmented by the
properties that are sent with the event, so that you can get separate or stacked graphs for different
games.
For metric values to be correctly displayed, they should be greater than or equal to 0.
There are some limits on the number of properties, property values, and metrics that you can use.
JavaScript

appInsights.trackEvent
    ("WinGame",
        // String properties:
        {Game: currentGame.name, Difficulty: currentGame.difficulty},
        // Numeric metrics:
        {Score: currentGame.score, Opponents: currentGame.opponentCount}
    );

appInsights.trackPageView
    ("page name", "[Link]",
        // String properties:
        {Game: currentGame.name, Difficulty: currentGame.difficulty},
        // Numeric metrics:
        {Score: currentGame.score, Opponents: currentGame.opponentCount}
    );

C#

// Set up some properties and metrics:


var properties = new Dictionary<string, string>
    {{"game", currentGame.Name}, {"difficulty", currentGame.Difficulty}};
var metrics = new Dictionary<string, double>
    {{"Score", currentGame.Score}, {"Opponents", currentGame.OpponentCount}};

// Send the event:


telemetry.TrackEvent("WinGame", properties, metrics);

Node.js

// Set up some properties and metrics:

var properties = {"game": currentGame.Name, "difficulty": currentGame.Difficulty};
var metrics = {"Score": currentGame.Score, "Opponents": currentGame.OpponentCount};

// Send the event:

telemetry.trackEvent({name: "WinGame", properties: properties, measurements: metrics});

Visual Basic

' Set up some properties:


Dim properties = New Dictionary(Of String, String)
properties.Add("game", currentGame.Name)
properties.Add("difficulty", currentGame.Difficulty)

Dim metrics = New Dictionary(Of String, Double)
metrics.Add("Score", currentGame.Score)
metrics.Add("Opponents", currentGame.OpponentCount)

' Send the event:

telemetry.TrackEvent("WinGame", properties, metrics)

Java

Map<String, String> properties = new HashMap<String, String>();
properties.put("game", currentGame.getName());
properties.put("difficulty", currentGame.getDifficulty());

Map<String, Double> metrics = new HashMap<String, Double>();
metrics.put("Score", currentGame.getScore());
metrics.put("Opponents", currentGame.getOpponents());

telemetry.trackEvent("WinGame", properties, metrics);

NOTE
Take care not to log personally identifiable information in properties.

If you used metrics, open Metrics Explorer and select the metric from the Custom group:
NOTE
If your metric doesn't appear, or if the Custom heading isn't there, close the selection blade and try again later.
Metrics can sometimes take an hour to be aggregated through the pipeline.

If you used properties and metrics, segment the metric by the property:

In Diagnostic Search, you can view the properties and metrics of individual occurrences of an event.

Use the Search field to see event occurrences that have a particular property value.
Learn more about search expressions.
Alternative way to set properties and metrics
If it's more convenient, you can collect the parameters of an event in a separate object:

var event = new EventTelemetry();

event.Name = "WinGame";
event.Metrics["processingTime"] = stopwatch.Elapsed.TotalMilliseconds;
event.Properties["game"] = currentGame.Name;
event.Properties["difficulty"] = currentGame.Difficulty;
event.Metrics["Score"] = currentGame.Score;
event.Metrics["Opponents"] = currentGame.Opponents;

telemetry.TrackEvent(event);

WARNING
Don't reuse the same telemetry item instance ( event in this example) to call Track*() multiple times. This may
cause telemetry to be sent with incorrect configuration.

Custom measurements and properties in Analytics


In Analytics, custom metrics and properties show in the customMeasurements and customDimensions
attributes of each telemetry record.
For example, if you have added a property named "game" to your request telemetry, this query counts
the occurrences of different values of "game", and shows the average of the custom metric "score":

requests
| summarize sum(itemCount), avg(todouble(customMeasurements.score)) by
tostring(customDimensions.game)

Notice that:
When you extract a value from the customDimensions or customMeasurements JSON, it has dynamic
type, and so you must cast it tostring or todouble .
To take account of the possibility of sampling, you should use sum(itemCount) , not count() .

Timing events
Sometimes you want to chart how long it takes to perform an action. For example, you might want to
know how long users take to consider choices in a game. You can use the measurement parameter for
this.
C#

var stopwatch = System.Diagnostics.Stopwatch.StartNew();

// ... perform the timed action ...

stopwatch.Stop();

var metrics = new Dictionary<string, double>
    {{"processingTime", stopwatch.Elapsed.TotalMilliseconds}};

// Set up some properties:

var properties = new Dictionary<string, string>
    {{"signalSource", currentSignalSource.Name}};

// Send the event:

telemetry.TrackEvent("SignalProcessed", properties, metrics);

Default properties for custom telemetry


If you want to set default property values for some of the custom events that you write, you can set them
in a TelemetryClient instance. They are attached to every telemetry item that's sent from that client.
C#

using Microsoft.ApplicationInsights.DataContracts;

var gameTelemetry = new TelemetryClient();

gameTelemetry.Context.Properties["Game"] = currentGame.Name;
// Now all telemetry will automatically be sent with the context property:
gameTelemetry.TrackEvent("WinGame");

Visual Basic

Dim gameTelemetry = New TelemetryClient()

gameTelemetry.Context.Properties("Game") = currentGame.Name
' Now all telemetry will automatically be sent with the context property:
gameTelemetry.TrackEvent("WinGame")

Java
import com.microsoft.applicationinsights.TelemetryClient;
import com.microsoft.applicationinsights.telemetry.TelemetryContext;
...

TelemetryClient gameTelemetry = new TelemetryClient();

TelemetryContext context = gameTelemetry.getContext();
context.getProperties().put("Game", currentGame.getName());

gameTelemetry.trackEvent("WinGame");

Node.js

var gameTelemetry = new applicationInsights.TelemetryClient();

gameTelemetry.commonProperties["Game"] = currentGame.Name;

gameTelemetry.trackEvent({name: "WinGame"});

Individual telemetry calls can override the default values in their property dictionaries.
For JavaScript web clients, use JavaScript telemetry initializers.
To add properties to all telemetry, including the data from standard collection modules, implement
ITelemetryInitializer .

Sampling, filtering, and processing telemetry


You can write code to process your telemetry before it's sent from the SDK. The processing includes data
that's sent from the standard telemetry modules, such as HTTP request collection and dependency
collection.
Add properties to telemetry by implementing ITelemetryInitializer . For example, you can add version
numbers or values that are calculated from other properties.
Filtering can modify or discard telemetry before it's sent from the SDK by implementing
ITelemetryProcessor . You control what is sent or discarded, but you have to account for the effect on your
metrics. Depending on how you discard items, you might lose the ability to navigate between related
items.
Sampling is a packaged solution to reduce the volume of data that's sent from your app to the portal. It
does so without affecting the displayed metrics. And it does so without affecting your ability to diagnose
problems by navigating between related items such as exceptions, requests, and page views.
Learn more.
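The initializer/processor idea can be sketched abstractly: each hook may enrich an item or veto it. This shows the pattern only; the real extension points are ITelemetryInitializer and ITelemetryProcessor in the SDK:

```javascript
// Run a telemetry item through a chain of hooks. A hook can mutate the item
// (enrichment) or return false to drop it (filtering).
function applyHooks(item, hooks) {
  for (var i = 0; i < hooks.length; i++) {
    if (hooks[i](item) === false) {
      return null; // item filtered out; nothing is sent
    }
  }
  return item;
}
```

Note that dropping items this way skews row counts, which is why sampling (which preserves itemCount) is the preferred volume-reduction mechanism.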

Disabling telemetry
To dynamically stop and start the collection and transmission of telemetry:
C#

using Microsoft.ApplicationInsights.Extensibility;

TelemetryConfiguration.Active.DisableTelemetry = true;

To disable selected standard collectors--for example, performance counters, HTTP requests, or
dependencies--delete or comment out the relevant lines in ApplicationInsights.config. You can do this, for
example, if you want to send your own TrackRequest data.
Node.js

telemetry.config.disableAppInsights = true;

To disable selected standard collectors--for example, performance counters, HTTP requests, or


dependencies--at initialization time, chain configuration methods to your SDK initialization code:

applicationInsights.setup()
    .setAutoCollectRequests(false)
    .setAutoCollectPerformance(false)
    .setAutoCollectExceptions(false)
    .setAutoCollectDependencies(false)
    .setAutoCollectConsole(false)
    .start();

To disable these collectors after initialization, use the Configuration object:


applicationInsights.Configuration.setAutoCollectRequests(false)

Developer mode
During debugging, it's useful to have your telemetry expedited through the pipeline so that you can see
results immediately. You also get additional messages that help you trace any problems with the
telemetry. Switch it off in production, because it may slow down your app.
C#

TelemetryConfiguration.Active.TelemetryChannel.DeveloperMode = true;

Visual Basic

TelemetryConfiguration.Active.TelemetryChannel.DeveloperMode = True

Setting the instrumentation key for selected custom telemetry


C#

var telemetry = new TelemetryClient();


telemetry.InstrumentationKey = "---my key---";
// ...

Dynamic instrumentation key


To avoid mixing up telemetry from development, test, and production environments, you can create
separate Application Insights resources and change their keys, depending on the environment.
Instead of getting the instrumentation key from the configuration file, you can set it in your code. Set the
key in an initialization method, such as global.aspx.cs in an ASP.NET service:
C#
protected void Application_Start()
{
    Microsoft.ApplicationInsights.Extensibility.
        TelemetryConfiguration.Active.InstrumentationKey =
        // - for example -
        WebConfigurationManager.AppSettings["ikey"];
    ...

JavaScript

appInsights.config.instrumentationKey = myKey;

In webpages, you might want to set it from the web server's state, rather than coding it literally into the
script. For example, in a webpage generated in an [Link] app:
JavaScript in Razor

<script type="text/javascript">
// Standard Application Insights webpage script:
var appInsights = window.appInsights || function(config){ ...
// Modify this part:
}({instrumentationKey:
    // Generate from server property:
    "@Microsoft.ApplicationInsights.Extensibility.
        TelemetryConfiguration.Active.InstrumentationKey"
}) // ...

TelemetryContext
TelemetryClient has a Context property, which contains values that are sent along with all telemetry data.
They are normally set by the standard telemetry modules, but you can also set them yourself. For
example:

telemetry.Context.Operation.Name = "MyOperationName";

If you set any of these values yourself, consider removing the relevant line from
ApplicationInsights.config, so that your values and the standard values don't get confused.
Component: The app and its version.
Device: Data about the device where the app is running. (In web apps, this is the server or client
device that the telemetry is sent from.)
InstrumentationKey: The Application Insights resource in Azure where the telemetry appears. It's
usually picked up from ApplicationInsights.config.
Location: The geographic location of the device.
Operation: In web apps, the current HTTP request. In other app types, you can set this to group
events together.
Id: A generated value that correlates different events, so that when you inspect any event in
Diagnostic Search, you can find related items.
Name: An identifier, usually the URL of the HTTP request.
SyntheticSource: If not null or empty, a string that indicates that the source of the request has
been identified as a robot or web test. By default, it is excluded from calculations in Metrics
Explorer.
Properties: Properties that are sent with all telemetry data. It can be overridden in individual Track*
calls.
Session: The user's session. The ID is set to a generated value, which is changed when the user has
not been active for a while.
User: User information.

Limits
There are some limits on the number of metrics and events per application (that is, per instrumentation
key). Limits depend on the pricing plan that you choose.

RESOURCE                                  DEFAULT LIMIT        NOTE

Total data per day                        100 GB               You can reduce data by setting a cap.
                                                               If you need more, you can increase the
                                                               limit up to 1,000 GB from the portal.
                                                               For capacities greater than 1,000 GB,
                                                               send mail to AIDataCap@[Link].

Free data per month (Basic price plan)    1 GB                 Additional data is charged per gigabyte.

Throttling                                32 k events/second   The limit is measured over a minute.

Data retention                            90 days              This resource is for Search, Analytics,
                                                               and Metrics Explorer.

Availability multi-step test detailed     90 days              This resource provides detailed results
results retention                                              of each step.

Maximum event size                        64 K

Property and metric name length           150                  See type schemas.

Property value string length              8,192                See type schemas.

Trace and exception message length        10 k                 See type schemas.

Availability tests count per app          100

Profiler data retention                   5 days

Profiler data sent per day                10 GB
For more information, see About pricing and quotas in Application Insights.
To avoid hitting the data rate limit, use sampling.
To determine how long data is kept, see Data retention and privacy.

Reference docs
[Link] reference
Java reference
JavaScript reference
Android SDK
iOS SDK

SDK code
[Link] Core SDK
[Link] 5
Windows Server packages
Java SDK
Node.js SDK
JavaScript SDK
All platforms

Questions
What exceptions might Track*() calls throw?
None. You don't need to wrap them in try-catch clauses. If the SDK encounters problems, it will log
messages in the debug console output and--if the messages get through--in Diagnostic Search.
Is there a REST API to get data from the portal?
Yes, the data access API. Other ways to extract data include export from Analytics to Power BI and
continuous export.

Next steps
Search events and logs
Troubleshooting
Track custom operations with Application Insights
.NET SDK
1/11/2018 • 13 min to read • Edit Online

Azure Application Insights SDKs automatically track incoming HTTP requests and calls to dependent services, such
as HTTP requests and SQL queries. Tracking and correlation of requests and dependencies give you visibility into
the whole application's responsiveness and reliability across all the microservices that make up the application.
There is a class of application patterns that can't be supported generically. Proper monitoring of such patterns
requires manual code instrumentation. This article covers a few patterns that might require manual
instrumentation, such as custom queue processing and running long-running background tasks.
This document provides guidance on how to track custom operations with the Application Insights SDK. This
documentation is relevant for:
Application Insights for .NET (also known as Base SDK) version 2.4+.
Application Insights for web applications (running [Link]) version 2.4+.
Application Insights for [Link] Core version 2.1+.

Overview
An operation is a logical piece of work run by an application. It has a name, start time, duration, result, and a
context of execution like user name, properties, and result. If operation A was initiated by operation B, then
operation B is set as a parent for A. An operation can have only one parent, but it can have many child operations.
For more information on operations and telemetry correlation, see Azure Application Insights telemetry correlation.
In the Application Insights .NET SDK, the operation is described by the abstract class OperationTelemetry and its
descendants RequestTelemetry and DependencyTelemetry.

Incoming operations tracking


The Application Insights web SDK automatically collects HTTP requests for [Link] applications that run in an IIS
pipeline and all [Link] Core applications. There are community-supported solutions for other platforms and
frameworks. However, if the application isn't supported by any of the standard or community-supported solutions,
you can instrument it manually.
Another example that requires custom tracking is the worker that receives items from the queue. For some queues,
the call to add a message to this queue is tracked as a dependency. However, the high-level operation that
describes message processing is not automatically collected.
Let's see how such operations could be tracked.
On a high level, the task is to create RequestTelemetry and set known properties. After the operation is finished,
you track the telemetry. The following example demonstrates this task.
HTTP request in Owin self-hosted app
In this example, trace context is propagated according to the HTTP Protocol for Correlation. You should expect to
receive headers that are described there.

public class ApplicationInsightsMiddleware : OwinMiddleware
{
    private readonly TelemetryClient telemetryClient = new TelemetryClient(TelemetryConfiguration.Active);

    public ApplicationInsightsMiddleware(OwinMiddleware next) : base(next) {}

    public override async Task Invoke(IOwinContext context)
    {
        // Let's create and start RequestTelemetry.
        var requestTelemetry = new RequestTelemetry
        {
            Name = $"{context.Request.Method} {context.Request.Uri.GetLeftPart(UriPartial.Path)}"
        };

        // If there is a Request-Id received from the upstream service, set the telemetry context accordingly.
        if (context.Request.Headers.ContainsKey("Request-Id"))
        {
            var requestId = context.Request.Headers.Get("Request-Id");
            // Get the operation ID from the Request-Id (if you follow the HTTP Protocol for Correlation).
            requestTelemetry.Context.Operation.Id = GetOperationId(requestId);
            requestTelemetry.Context.Operation.ParentId = requestId;
        }

        // StartOperation is a helper method that allows correlation of
        // current operations with nested operations/telemetry
        // and initializes start time and duration on telemetry items.
        var operation = telemetryClient.StartOperation(requestTelemetry);

        // Process the request.
        try
        {
            await Next.Invoke(context);
        }
        catch (Exception e)
        {
            requestTelemetry.Success = false;
            telemetryClient.TrackException(e);
            throw;
        }
        finally
        {
            // Update status code and success as appropriate.
            if (context.Response != null)
            {
                requestTelemetry.ResponseCode = context.Response.StatusCode.ToString();
                requestTelemetry.Success = context.Response.StatusCode >= 200
                    && context.Response.StatusCode <= 299;
            }
            else
            {
                requestTelemetry.Success = false;
            }

            // Now it's time to stop the operation (and track telemetry).
            telemetryClient.StopOperation(operation);
        }
    }

    public static string GetOperationId(string id)
    {
        // Returns the root ID from the '|' to the first '.' if any.
        int rootEnd = id.IndexOf('.');
        if (rootEnd < 0)
            rootEnd = id.Length;

        int rootStart = id[0] == '|' ? 1 : 0;

        return id.Substring(rootStart, rootEnd - rootStart);
    }
}
The HTTP Protocol for Correlation also declares the Correlation-Context header. However, it's omitted here for
simplicity.

Queue instrumentation
While the HTTP Protocol for Correlation defines how correlation details are passed with an HTTP request, every
queue protocol has to define how the same details are passed along with the queue message. Some queue protocols
(such as AMQP) allow passing additional metadata, and others (such as Azure Storage Queue) require the context
to be encoded into the message payload.
Service Bus Queue
Application Insights tracks Service Bus Messaging calls with the new Microsoft Azure ServiceBus Client for .NET
version 3.0.0 and higher. If you use the message handler pattern to process messages, you are done: all Service Bus
calls made by your service are automatically tracked and correlated with other telemetry items. Refer to Service
Bus client tracing with Microsoft Application Insights if you process messages manually.
If you use the WindowsAzure.ServiceBus package, read on. The following examples demonstrate how to track (and
correlate) calls to Service Bus, because the Service Bus queue uses the AMQP protocol and Application Insights doesn't
automatically track queue operations. Correlation identifiers are passed in the message properties.
Enqueue

public async Task Enqueue(string payload)
{
    // StartOperation is a helper method that initializes the telemetry item
    // and allows correlation of this operation with its parent and children.
    var operation = telemetryClient.StartOperation<DependencyTelemetry>("enqueue " + queueName);
    operation.Telemetry.Type = "Queue";
    operation.Telemetry.Data = "Enqueue " + queueName;

    var message = new BrokeredMessage(payload);

    // Service Bus queue allows a property bag to pass along with the message.
    // We will use it to pass our correlation identifiers (and other context)
    // to the consumer.
    message.Properties.Add("ParentId", operation.Telemetry.Id);
    message.Properties.Add("RootId", operation.Telemetry.Context.Operation.Id);

    try
    {
        await queue.SendAsync(message);

        // Set operation.Telemetry Success and ResponseCode here.
        operation.Telemetry.Success = true;
    }
    catch (Exception e)
    {
        telemetryClient.TrackException(e);
        // Set operation.Telemetry Success and ResponseCode here.
        operation.Telemetry.Success = false;
        throw;
    }
    finally
    {
        telemetryClient.StopOperation(operation);
    }
}

Process
public async Task Process(BrokeredMessage message)
{
    // After the message is taken from the queue, create RequestTelemetry to track its processing.
    // It might also make sense to get the name from the message.
    RequestTelemetry requestTelemetry = new RequestTelemetry { Name = "Dequeue " + queueName };

    var rootId = message.Properties["RootId"].ToString();
    var parentId = message.Properties["ParentId"].ToString();
    // Get the operation ID from the Request-Id (if you follow the HTTP Protocol for Correlation).
    requestTelemetry.Context.Operation.Id = rootId;
    requestTelemetry.Context.Operation.ParentId = parentId;

    var operation = telemetryClient.StartOperation(requestTelemetry);

    try
    {
        await ProcessMessage();
    }
    catch (Exception e)
    {
        telemetryClient.TrackException(e);
        throw;
    }
    finally
    {
        // Update status code and success as appropriate.
        telemetryClient.StopOperation(operation);
    }
}

Azure Storage queue


The following example shows how to track the Azure Storage queue operations and correlate telemetry between
the producer, the consumer, and Azure Storage.
The Storage queue has an HTTP API. All calls to the queue are tracked by the Application Insights Dependency
Collector for HTTP requests. Make sure you have
Microsoft.ApplicationInsights.DependencyCollector.DependencyTrackingTelemetryModule in
ApplicationInsights.config. If you don't have it, add it programmatically as described in Filtering and
Preprocessing in the Azure Application Insights SDK.
If you configure Application Insights manually, make sure you create and initialize
DependencyTrackingTelemetryModule similarly to:

DependencyTrackingTelemetryModule module = new DependencyTrackingTelemetryModule();

// You can prevent correlation header injection to some domains by adding them to the excluded list.
// Make sure you add the Storage endpoint. Otherwise, you might experience request signature validation issues
// on the Storage service side.
module.ExcludeComponentCorrelationHttpHeadersOnDomains.Add("core.windows.net");
module.Initialize(TelemetryConfiguration.Active);

// Do not forget to dispose of the module during application shutdown.

You also might want to correlate the Application Insights operation ID with the Storage request ID. For information
on how to set and get a Storage request client and a server request ID, see Monitor, diagnose, and troubleshoot
Azure Storage.
Enqueue
Because Storage queues support the HTTP API, all operations with the queue are automatically tracked by
Application Insights. In many cases, this instrumentation should be enough. However, to correlate traces on the
consumer side with producer traces, you must pass some correlation context similarly to how we do it in the HTTP
Protocol for Correlation.
This example shows how to track the Enqueue operation. You can:
Correlate retries (if any): They all have one common parent that's the Enqueue operation. Otherwise, they're
tracked as children of the incoming request. If there are multiple logical requests to the queue, it might be
difficult to find which call resulted in retries.
Correlate Storage logs (if and when needed): They're correlated with Application Insights telemetry.
The Enqueue operation is the child of a parent operation (for example, an incoming HTTP request). The HTTP
dependency call is the child of the Enqueue operation and the grandchild of the incoming request:

public async Task Enqueue(CloudQueue queue, string message)
{
    var operation = telemetryClient.StartOperation<DependencyTelemetry>("enqueue " + queue.Name);
    operation.Telemetry.Type = "Queue";
    operation.Telemetry.Data = "Enqueue " + queue.Name;

    // MessagePayload represents your custom message and also serializes correlation identifiers into the payload.
    // For example, if you choose to pass the payload serialized to JSON, it might look like
    // {'RootId' : 'some-id', 'ParentId' : '|some-id.1.2.3.', 'Payload' : 'your message to process'}
    var jsonPayload = JsonConvert.SerializeObject(new MessagePayload
    {
        RootId = operation.Telemetry.Context.Operation.Id,
        ParentId = operation.Telemetry.Id,
        Payload = message
    });

    CloudQueueMessage queueMessage = new CloudQueueMessage(jsonPayload);

    // Add operation.Telemetry.Id to the OperationContext to correlate Storage logs and Application Insights
    // telemetry.
    OperationContext context = new OperationContext { ClientRequestID = operation.Telemetry.Id };

    try
    {
        await queue.AddMessageAsync(queueMessage, null, null, new QueueRequestOptions(), context);
    }
    catch (StorageException e)
    {
        operation.Telemetry.Properties.Add("AzureServiceRequestID", e.RequestInformation.ServiceRequestID);
        operation.Telemetry.Success = false;
        operation.Telemetry.ResultCode = e.RequestInformation.HttpStatusCode.ToString();
        telemetryClient.TrackException(e);
    }
    finally
    {
        // Update status code and success as appropriate.
        telemetryClient.StopOperation(operation);
    }
}

To reduce the amount of telemetry your application reports or if you don't want to track the Enqueue operation for
other reasons, use the Activity API directly:
Create (and start) a new Activity instead of starting the Application Insights operation. You do not need to
assign any properties on it except the operation name.
Serialize Activity.Id into the message payload instead of operation.Telemetry.Id. You can also use
Activity.Current.Id.
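The Activity-based alternative can be sketched as follows. This is a minimal sketch under the same assumptions as the Enqueue example above (the hypothetical MessagePayload type, a CloudQueue named queue, and Newtonsoft.Json for serialization); the Activity type comes from System.Diagnostics:

```csharp
// Sketch: enqueue without an explicit Application Insights operation.
var activity = new Activity("enqueue " + queue.Name);
activity.Start();

try
{
    // Serialize the Activity identifiers into the payload instead of operation.Telemetry.Id.
    var jsonPayload = JsonConvert.SerializeObject(new MessagePayload
    {
        RootId = activity.RootId,
        ParentId = activity.Id,
        Payload = message
    });

    await queue.AddMessageAsync(new CloudQueueMessage(jsonPayload));
}
finally
{
    activity.Stop();
}
```

Because no DependencyTelemetry is started, the Enqueue step itself is not reported; only the automatically collected HTTP call to the Storage endpoint appears, parented to the Activity.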

Dequeue
Similarly to Enqueue , an actual HTTP request to the Storage queue is automatically tracked by Application Insights.
However, the Enqueue operation presumably happens in the parent context, such as an incoming request context.
Application Insights SDKs automatically correlate such an operation (and its HTTP part) with the parent request and
other telemetry reported in the same scope.
The Dequeue operation is trickier. The Application Insights SDK automatically tracks HTTP requests, but it
doesn't know the correlation context until the message is parsed, so it isn't possible to correlate the HTTP request
that gets the message with the rest of the telemetry.
In many cases, it might be useful to correlate the HTTP request to the queue with other traces as well. The following
example demonstrates how to do it:

public async Task<MessagePayload> Dequeue(CloudQueue queue)
{
    var telemetry = new DependencyTelemetry
    {
        Type = "Queue",
        Name = "Dequeue " + queue.Name
    };

    telemetry.Start();

    try
    {
        var message = await queue.GetMessageAsync();

        if (message != null)
        {
            var payload = JsonConvert.DeserializeObject<MessagePayload>(message.AsString);

            // If there is a message, we want to correlate the Dequeue operation with processing.
            // However, we will only know what correlation ID to use after we get it from the message,
            // so we will report telemetry after we know the IDs.
            telemetry.Context.Operation.Id = payload.RootId;
            telemetry.Context.Operation.ParentId = payload.ParentId;

            // Delete the message.

            return payload;
        }
    }
    catch (StorageException e)
    {
        telemetry.Properties.Add("AzureServiceRequestID", e.RequestInformation.ServiceRequestID);
        telemetry.Success = false;
        telemetry.ResultCode = e.RequestInformation.HttpStatusCode.ToString();
        telemetryClient.TrackException(e);
    }
    finally
    {
        // Update status code and success as appropriate.
        telemetry.Stop();
        telemetryClient.TrackDependency(telemetry);
    }

    return null;
}

Process
In the following example, an incoming message is tracked in a manner similar to an incoming HTTP request:
public async Task Process(MessagePayload message)
{
    // After the message is dequeued from the queue, create RequestTelemetry to track its processing.
    // It might also make sense to get the name from the message.
    RequestTelemetry requestTelemetry = new RequestTelemetry { Name = "Dequeue " + queueName };
    requestTelemetry.Context.Operation.Id = message.RootId;
    requestTelemetry.Context.Operation.ParentId = message.ParentId;

    var operation = telemetryClient.StartOperation(requestTelemetry);

    try
    {
        await ProcessMessage();
    }
    catch (Exception e)
    {
        telemetryClient.TrackException(e);
        throw;
    }
    finally
    {
        // Update status code and success as appropriate.
        telemetryClient.StopOperation(operation);
    }
}

Other queue operations can be instrumented similarly. A peek operation should be instrumented in the same way
as a dequeue operation. Instrumenting queue management operations isn't necessary: Application Insights already
tracks the underlying HTTP operations, and in most cases that's enough.
When you instrument message deletion, make sure you set the operation (correlation) identifiers. Alternatively, you
can use the Activity API. Then you don't need to set operation identifiers on the telemetry items, because the
Application Insights SDK does it for you:
Create a new Activity after you've got an item from the queue.
Use Activity.SetParentId(message.ParentId) to correlate consumer and producer logs.
Start the Activity .
Track dequeue, process, and delete operations by using Start/StopOperation helpers. Do it from the same
asynchronous control flow (execution context). In this way, they're correlated properly.
Stop the Activity .
Use Start/StopOperation , or call Track telemetry manually.
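The steps above can be sketched as follows. This is a minimal sketch, not the documented implementation; ProcessMessageAsync stands in for your own processing logic, and the MessagePayload type is the hypothetical one used earlier:

```csharp
var message = await queue.GetMessageAsync();
var payload = JsonConvert.DeserializeObject<MessagePayload>(message.AsString);

// Create a new Activity and parent it to the producer's operation.
var activity = new Activity("process " + queue.Name);
activity.SetParentId(payload.ParentId);
activity.Start();

try
{
    // Telemetry tracked inside Start/StopOperation here inherits the Activity's
    // correlation identifiers, so processing and deletion are correlated automatically.
    using (var operation = telemetryClient.StartOperation<RequestTelemetry>("Process " + queue.Name))
    {
        await ProcessMessageAsync(payload);        // your processing logic (assumed)
        await queue.DeleteMessageAsync(message);   // tracked in the same async flow
    }
}
finally
{
    activity.Stop();
}
```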
Batch processing
With some queues, you can dequeue multiple messages with one request. Processing such messages is
presumably independent and belongs to the different logical operations. In this case, it's not possible to correlate
the Dequeue operation to particular message processing.
Each message should be processed in its own asynchronous control flow. For more information, see the Outgoing
dependencies tracking section.
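For example, each dequeued message can be dispatched into its own asynchronous flow. This is a sketch, assuming a CloudQueue named queue and a hypothetical per-message ProcessAsync method like the Process examples above (requires System.Linq):

```csharp
// Dequeue a batch of up to 32 messages in one request.
var messages = await queue.GetMessagesAsync(32);

// Start one independent async flow per message so each processing
// operation gets its own correlation context.
var tasks = messages.Select(m => Task.Run(() => ProcessAsync(m)));
await Task.WhenAll(tasks);
```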

Long-running background tasks


Some applications start long-running operations that might be caused by user requests. From the
tracing/instrumentation perspective, it's not different from request or dependency instrumentation:
async Task BackgroundTask()
{
    var operation = telemetryClient.StartOperation<RequestTelemetry>(taskName);
    operation.Telemetry.Properties["Type"] = "Background";
    try
    {
        int progress = 0;
        while (progress < 100)
        {
            // Process the task.
            telemetryClient.TrackTrace($"done {progress++}%");
        }
        // Update status code and success as appropriate.
    }
    catch (Exception e)
    {
        telemetryClient.TrackException(e);
        // Update status code and success as appropriate.
        throw;
    }
    finally
    {
        telemetryClient.StopOperation(operation);
    }
}

In this example, telemetryClient.StartOperation creates the RequestTelemetry and fills the correlation context. Let's
say you have a parent operation that was created by an incoming request that scheduled the operation. As long as
BackgroundTask starts in the same asynchronous control flow as the incoming request, it's correlated with that
parent operation. BackgroundTask and all nested telemetry items are automatically correlated with the request that
caused it, even after the request ends.
When the task starts from the background thread that doesn't have any operation ( Activity ) associated with it,
BackgroundTask doesn't have any parent. However, it can have nested operations. All telemetry items reported
from the task are correlated to the RequestTelemetry created in BackgroundTask .

Outgoing dependencies tracking


You can track your own dependency kind or an operation that's not supported by Application Insights.
The Enqueue method in the Service Bus queue or the Storage queue can serve as examples for such custom
tracking.
The general approach for custom dependency tracking is to:
Call the TelemetryClient.StartOperation<DependencyTelemetry> (extension) method that fills the DependencyTelemetry properties that
are needed for correlation and some other properties (start time stamp, duration).
Set other custom properties on the DependencyTelemetry , such as the name and any other context you need.
Make a dependency call and wait for it.
Stop the operation with StopOperation when it's finished.
Handle exceptions.
public async Task RunMyTaskAsync()
{
    using (var operation = telemetryClient.StartOperation<DependencyTelemetry>("task 1"))
    {
        try
        {
            var myTask = await StartMyTaskAsync();
            // Update status code and success as appropriate.
        }
        catch (Exception)
        {
            // Update status code and success as appropriate.
            throw;
        }
    }
}

Disposing of the operation stops it, so you can dispose of it instead of calling StopOperation .
Warning: In some cases, an unhandled exception can prevent finally from being called, so operations may not be tracked.
Parallel operations processing and tracking
StopOperation only stops the operation that was started. If the current running operation doesn't match the one
you want to stop, StopOperation does nothing. This situation might happen if you start multiple operations in
parallel in the same execution context:

var firstOperation = telemetryClient.StartOperation<DependencyTelemetry>("task 1");
var firstTask = RunMyTaskAsync();

var secondOperation = telemetryClient.StartOperation<DependencyTelemetry>("task 2");
var secondTask = RunMyTaskAsync();

await firstTask;

// FAILURE!!! This will do nothing and will not report telemetry for the first operation
// because secondOperation is currently active.
telemetryClient.StopOperation(firstOperation);

await secondTask;

Make sure you always call StartOperation and process the operation in the same async method, to isolate operations
running in parallel. If the operation is synchronous (or not async), wrap the processing and tracking with Task.Run :

public void RunMyTask(string name)
{
    using (var operation = telemetryClient.StartOperation<DependencyTelemetry>(name))
    {
        Process();
        // Update status code and success as appropriate.
    }
}

public async Task RunAllTasks()
{
    var task1 = Task.Run(() => RunMyTask("task 1"));
    var task2 = Task.Run(() => RunMyTask("task 2"));

    await Task.WhenAll(task1, task2);
}
Next steps
Learn the basics of telemetry correlation in Application Insights.
See the data model for Application Insights types and data model.
Report custom events and metrics to Application Insights.
Check out standard configuration for context properties collection.
Check the [Link] User Guide to see how we correlate telemetry.
Filtering and preprocessing telemetry in the
Application Insights SDK
11/20/2017 • 7 min to read

You can write and configure plug-ins for the Application Insights SDK to customize how telemetry is captured
and processed before it is sent to the Application Insights service.
Sampling reduces the volume of telemetry without affecting your statistics. It keeps together related data
points so that you can navigate between them when diagnosing a problem. In the portal, the total counts are
multiplied to compensate for the sampling.
Filtering with Telemetry Processors for ASP.NET or Java lets you select or modify telemetry in the SDK before
it is sent to the server. For example, you could reduce the volume of telemetry by excluding requests from
robots. But filtering is a more basic approach to reducing traffic than sampling. It allows you more control
over what is transmitted, but you have to be aware that it affects your statistics - for example, if you filter out
all successful requests.
Telemetry Initializers add properties to any telemetry sent from your app, including telemetry from the
standard modules. For example, you could add calculated values; or version numbers by which to filter the
data in the portal.
The SDK API is used to send custom events and metrics.
Before you start:
Install the Application Insights SDK for ASP.NET or the SDK for Java in your app.

Filtering: ITelemetryProcessor
This technique gives you more direct control over what is included or excluded from the telemetry stream. You
can use it in conjunction with Sampling, or separately.
To filter telemetry, you write a telemetry processor and register it with the SDK. All telemetry goes through your
processor, and you can choose to drop it from the stream, or add properties. This includes telemetry from the
standard modules such as the HTTP request collector and the dependency collector, as well as telemetry you
have written yourself. You can, for example, filter out telemetry about requests from robots, or successful
dependency calls.

WARNING
Filtering the telemetry sent from the SDK using processors can skew the statistics that you see in the portal, and make it
difficult to follow related items.
Instead, consider using sampling.

Create a telemetry processor (C#)


1. Verify that the Application Insights SDK in your project is version 2.0.0 or later. Right-click your project in
Visual Studio Solution Explorer and choose Manage NuGet Packages. In NuGet package manager, check
Microsoft.ApplicationInsights.
2. To create a filter, implement ITelemetryProcessor. This is another extensibility point like telemetry module,
telemetry initializer, and telemetry channel.
Notice that Telemetry Processors construct a chain of processing. When you instantiate a telemetry
processor, you pass a link to the next processor in the chain. When a telemetry data point is passed to the
Process method, it does its work and then calls the next Telemetry Processor in the chain.

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public class SuccessfulDependencyFilter : ITelemetryProcessor
{
    private ITelemetryProcessor Next { get; set; }

    // You can pass values from .config
    public string MyParamFromConfigFile { get; set; }

    // Link processors to each other in a chain.
    public SuccessfulDependencyFilter(ITelemetryProcessor next)
    {
        this.Next = next;
    }

    public void Process(ITelemetry item)
    {
        // To filter out an item, just return.
        if (!OKtoSend(item)) { return; }
        // Modify the item if required.
        ModifyItem(item);

        this.Next.Process(item);
    }

    // Example: replace with your own criteria.
    private bool OKtoSend (ITelemetry item)
    {
        var dependency = item as DependencyTelemetry;
        if (dependency == null) return true;

        return dependency.Success != true;
    }

    // Example: replace with your own modifiers.
    private void ModifyItem (ITelemetry item)
    {
        item.Context.Properties.Add("app-version", "1." + MyParamFromConfigFile);
    }
}

3. Insert this in ApplicationInsights.config:

<TelemetryProcessors>
<Add Type="WebApplication9.SuccessfulDependencyFilter, WebApplication9">
<!-- Set public property -->
<MyParamFromConfigFile>2-beta</MyParamFromConfigFile>
</Add>
</TelemetryProcessors>

(This is the same section where you initialize a sampling filter.)


You can pass string values from the .config file by providing public named properties in your class.
WARNING
Take care to match the type name and any property names in the .config file to the class and property names in the code.
If the .config file references a non-existent type or property, the SDK may silently fail to send any telemetry.

Alternatively, you can initialize the filter in code. In a suitable initialization class - for example AppStart in
Global.asax.cs - insert your processor into the chain:

var builder = TelemetryConfiguration.Active.TelemetryProcessorChainBuilder;

builder.Use((next) => new SuccessfulDependencyFilter(next));

// If you have more processors:
builder.Use((next) => new AnotherProcessor(next));

builder.Build();

TelemetryClients created after this point will use your processors.


The following code shows how to register a telemetry processor in ASP.NET Core.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    var configuration = app.ApplicationServices.GetService<TelemetryConfiguration>();
    configuration.TelemetryProcessorChainBuilder.Use((next) => new SuccessfulDependencyFilter(next));
    configuration.TelemetryProcessorChainBuilder.Build();
}

Example filters
Synthetic requests
Filter out bots and web tests. Although Metrics Explorer gives you the option to filter out synthetic sources, this
option reduces traffic by filtering them at the SDK.

public void Process(ITelemetry item)
{
    if (!string.IsNullOrEmpty(item.Context.Operation.SyntheticSource)) { return; }

    // Send everything else:
    this.Next.Process(item);
}

Failed authentication
Filter out requests with a "401" response.
public void Process(ITelemetry item)
{
    var request = item as RequestTelemetry;

    if (request != null &&
        request.ResponseCode.Equals("401", StringComparison.OrdinalIgnoreCase))
    {
        // To filter out an item, just terminate the chain:
        return;
    }
    // Send everything else:
    this.Next.Process(item);
}

Filter out fast remote dependency calls


If you only want to diagnose calls that are slow, filter out the fast ones.

NOTE
This will skew the statistics you see on the portal. The dependency chart will look as if the dependency calls are all failures.

public void Process(ITelemetry item)
{
    var request = item as DependencyTelemetry;

    if (request != null && request.Duration.TotalMilliseconds < 100)
    {
        return;
    }
    this.Next.Process(item);
}

Diagnose dependency issues


This blog describes a project to diagnose dependency issues by automatically sending regular pings to
dependencies.

Add properties: ITelemetryInitializer


Use telemetry initializers to define global properties that are sent with all telemetry; and to override selected
behavior of the standard telemetry modules.
For example, the Application Insights for Web package collects telemetry about HTTP requests. By default, it flags
as failed any request with a response code >= 400. But if you want to treat 400 as a success, you can provide a
telemetry initializer that sets the Success property.
If you provide a telemetry initializer, it is called whenever any of the Track*() methods is called. This includes
methods called by the standard telemetry modules. By convention, these modules do not set any property that
has already been set by an initializer.
Define your initializer
C#
using System;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

namespace MvcWebRole.Telemetry
{
    /*
     * Custom TelemetryInitializer that overrides the default SDK
     * behavior of treating response codes >= 400 as failed requests
     *
     */
    public class MyTelemetryInitializer : ITelemetryInitializer
    {
        public void Initialize(ITelemetry telemetry)
        {
            var requestTelemetry = telemetry as RequestTelemetry;
            // Is this a TrackRequest() ?
            if (requestTelemetry == null) return;
            int code;
            bool parsed = Int32.TryParse(requestTelemetry.ResponseCode, out code);
            if (!parsed) return;
            if (code >= 400 && code < 500)
            {
                // If we set the Success property, the SDK won't change it:
                requestTelemetry.Success = true;
                // Allow us to filter these requests in the portal:
                requestTelemetry.Context.Properties["Overridden400s"] = "true";
            }
            // else leave the SDK to set the Success property
        }
    }
}

Load your initializer


In ApplicationInsights.config:

<ApplicationInsights>
<TelemetryInitializers>
<!-- Fully qualified type name, assembly name: -->
<Add Type="MvcWebRole.Telemetry.MyTelemetryInitializer, MvcWebRole"/>
...
</TelemetryInitializers>
</ApplicationInsights>

Alternatively, you can instantiate the initializer in code, for example in Global.asax.cs:

protected void Application_Start()
{
    // ...
    TelemetryConfiguration.Active.TelemetryInitializers
        .Add(new MyTelemetryInitializer());
}

See more of this sample.


JavaScript telemetry initializers
JavaScript
Insert a telemetry initializer immediately after the initialization code that you got from the portal:
<script type="text/javascript">
    // ... initialization code
    ...({
        instrumentationKey: "your instrumentation key"
    });
    window.appInsights = appInsights;

    // Adding telemetry initializer.
    // This is called whenever a new telemetry item
    // is created.

    appInsights.queue.push(function () {
        appInsights.context.addTelemetryInitializer(function (envelope) {
            var telemetryItem = envelope.data.baseData;

            // To check the telemetry item's type - for example PageView:
            if (envelope.name == Microsoft.ApplicationInsights.Telemetry.PageView.envelopeType) {
                // this statement removes url from all page view documents
                telemetryItem.url = "URL CENSORED";
            }

            // To set custom properties:
            telemetryItem.properties = telemetryItem.properties || {};
            telemetryItem.properties["globalProperty"] = "boo";

            // To set custom metrics:
            telemetryItem.measurements = telemetryItem.measurements || {};
            telemetryItem.measurements["globalMetric"] = 100;
        });
    });

    // End of inserted code.

    appInsights.trackPageView();
</script>

For a summary of the non-custom properties available on the telemetryItem, see Application Insights Export
Data Model.
You can add as many initializers as you like.

ITelemetryProcessor and ITelemetryInitializer


What's the difference between telemetry processors and telemetry initializers?
There are some overlaps in what you can do with them: both can be used to add properties to telemetry.
TelemetryInitializers always run before TelemetryProcessors.
TelemetryProcessors allow you to completely replace or discard a telemetry item.
TelemetryProcessors don't process performance counter telemetry.

Troubleshooting ApplicationInsights.config
Confirm that the fully qualified type name and assembly name are correct.
Confirm that the ApplicationInsights.config file is in your output directory and contains any recent changes.

Reference docs
API Overview
ASP.NET reference
SDK Code
ASP.NET Core SDK
ASP.NET SDK
JavaScript SDK

Next steps
Search events and logs
Sampling
Troubleshooting
Sampling in Application Insights
11/28/2017 • 15 min to read

Sampling is a feature in Azure Application Insights. It is the recommended way to reduce telemetry traffic
and storage, while preserving a statistically correct analysis of application data. The filter selects items that
are related, so that you can navigate between items when you are doing diagnostic investigations. When
metric counts are presented to you in the portal, they are renormalized to take account of the sampling, to
minimize any effect on the statistics.
Sampling reduces traffic and data costs, and helps you avoid throttling.

In brief:
Sampling retains 1 in n records and discards the rest. For example, it might retain 1 in 5 events, a
sampling rate of 20%.
Sampling happens automatically if your application sends a lot of telemetry, in ASP.NET web server
apps.
You can also set sampling manually, either in the portal on the pricing page, or in the ASP.NET SDK in
the .config file, to also reduce the network traffic.
If you log custom events and you want to make sure that a set of events is either retained or discarded
together, make sure that they have the same OperationId value.
The sampling divisor n is reported in each record in the property itemCount , which in Search appears
under the friendly name "request count" or "event count". When sampling is not in operation,
itemCount==1 .
If you write Analytics queries, you should take account of sampling. In particular, instead of simply
counting records, you should use summarize sum(itemCount) .
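For example, an Analytics query that counts requests while accounting for sampling might look like this (a sketch; the requests table and itemCount column follow the standard Application Insights schema):

```kusto
// Count requests, compensating for sampling via itemCount.
requests
| summarize requestCount = sum(itemCount) by bin(timestamp, 1h)
```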

Types of sampling
There are three alternative sampling methods:
Adaptive sampling automatically adjusts the volume of telemetry sent from the SDK in your ASP.NET
app. Beginning with SDK v2.0.0-beta3, this is the default sampling method. Adaptive sampling is
currently only available for ASP.NET server-side telemetry.
Fixed-rate sampling reduces the volume of telemetry sent from both your ASP.NET server and from
your users' browsers. You set the rate. The client and server will synchronize their sampling so that, in
Search, you can navigate between related page views and requests.
Ingestion sampling works in the Azure portal. It discards some of the telemetry that arrives from your
app, at a sampling rate that you set. It doesn't reduce telemetry traffic sent from your app, but helps
you keep within your monthly quota. The main advantage of ingestion sampling is that you can set the
sampling rate without redeploying your app, and it works uniformly for all servers and clients.
If Adaptive or Fixed rate sampling are in operation, Ingestion sampling is disabled.

Ingestion sampling
This form of sampling operates at the point where the telemetry from your web server, browsers, and
devices reaches the Application Insights service endpoint. Although it doesn't reduce the telemetry traffic
sent from your app, it does reduce the amount processed and retained (and charged for) by Application
Insights.
Use this type of sampling if your app often goes over its monthly quota and you don't have the option of
using either of the SDK-based types of sampling.
Set the sampling rate in the Quotas and Pricing blade:

Like other types of sampling, the algorithm retains related telemetry items. For example, when you're
inspecting the telemetry in Search, you'll be able to find the request related to a particular exception.
Metric counts such as request rate and exception rate are correctly retained.
Data points that are discarded by sampling are not available in any Application Insights feature such as
Continuous Export.
Ingestion sampling doesn't operate while SDK-based adaptive or fixed-rate sampling is in operation. Note
that adaptive sampling is enabled by default when the ASP.NET SDK is enabled in Visual Studio or by using
Status Monitor, and ingestion sampling is disabled. If the sampling rate at the SDK is less than 100%, then
the ingestion sampling rate that you set is ignored.

WARNING
The value shown on the tile indicates the value that you set for ingestion sampling. It doesn't represent the actual
sampling rate if SDK sampling is in operation.

Adaptive sampling at your web server


Adaptive sampling is available for the Application Insights SDK for ASP.NET v2.0.0-beta3 and later, and is
enabled by default.
Adaptive sampling affects the volume of telemetry sent from your web server app to the Application
Insights service endpoint. The volume is adjusted automatically to keep within a specified maximum rate of
traffic.
It doesn't operate at low volumes of telemetry, so an app in debugging or a website with low usage won't
be affected.
To achieve the target volume, some of the generated telemetry is discarded. But like other types of
sampling, the algorithm retains related telemetry items. For example, when you're inspecting the telemetry
in Search, you'll be able to find the request related to a particular exception.
Metric counts such as request rate and exception rate are adjusted to compensate for the sampling rate, so
that they show approximately correct values in Metric Explorer.
Update NuGet packages
Update your project's NuGet packages to the latest pre-release version of Application Insights. In Visual
Studio, right-click the project in Solution Explorer, choose Manage NuGet Packages, check Include
prerelease and search for Microsoft.ApplicationInsights.Web.
Configuring adaptive sampling
In ApplicationInsights.config, you can adjust several parameters in the AdaptiveSamplingTelemetryProcessor
node. The figures shown are the default values:
<MaxTelemetryItemsPerSecond>5</MaxTelemetryItemsPerSecond>

The target rate that the adaptive algorithm aims for on each server host. If your web app runs on
many hosts, reduce this value so as to remain within your target rate of traffic at the Application
Insights portal.
<EvaluationInterval>00:00:15</EvaluationInterval>

The interval at which the current rate of telemetry is re-evaluated. Evaluation is performed as a
moving average. You might want to shorten this interval if your telemetry is liable to sudden bursts.
<SamplingPercentageDecreaseTimeout>00:02:00</SamplingPercentageDecreaseTimeout>

When sampling percentage value changes, how soon after are we allowed to lower sampling
percentage again to capture less data.
<SamplingPercentageIncreaseTimeout>00:15:00</SamplingPercentageIncreaseTimeout>

When the sampling percentage changes, how soon afterwards we're allowed to increase it again to
capture more data.
<MinSamplingPercentage>0.1</MinSamplingPercentage>

As sampling percentage varies, what is the minimum value we're allowed to set.
<MaxSamplingPercentage>100.0</MaxSamplingPercentage>

As sampling percentage varies, what is the maximum value we're allowed to set.
<MovingAverageRatio>0.25</MovingAverageRatio>

In the calculation of the moving average, the weight assigned to the most recent value. Use a value
equal to or less than 1. Smaller values make the algorithm less reactive to sudden changes.
<InitialSamplingPercentage>100</InitialSamplingPercentage>

The value assigned when the app has just started. Don't reduce this while you're debugging.
<ExcludedTypes>Trace;Exception</ExcludedTypes>

A semicolon-delimited list of types that you don't want to be sampled. Recognized types are:
Dependency, Event, Exception, PageView, Request, Trace. All instances of the specified types are
transmitted; the types that are not specified are sampled.
<IncludedTypes>Request;Dependency</IncludedTypes>

A semicolon-delimited list of types that you want to be sampled. Recognized types are:
Dependency, Event, Exception, PageView, Request, Trace. The specified types are sampled; all
instances of the other types are always transmitted.
To switch off adaptive sampling, remove the AdaptiveSamplingTelemetryProcessor node from
ApplicationInsights.config.
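The behavior of these parameters can be modeled in a few lines. The following Python sketch is purely illustrative (it is not the SDK's actual implementation): a moving average of the observed rate, weighted by MovingAverageRatio, drives the sampling percentage toward the MaxTelemetryItemsPerSecond target, clamped between the minimum and maximum percentages.

```python
def update_moving_average(current_avg, latest_rate, moving_average_ratio=0.25):
    """Exponentially weighted moving average of the outgoing telemetry rate.

    moving_average_ratio is the weight of the most recent observation
    (the MovingAverageRatio setting); smaller values react more slowly.
    """
    return moving_average_ratio * latest_rate + (1 - moving_average_ratio) * current_avg

def next_sampling_percentage(avg_items_per_second, target_items_per_second=5,
                             min_pct=0.1, max_pct=100.0):
    """Pick the sampling percentage that would bring the averaged rate down
    to the target (MaxTelemetryItemsPerSecond), clamped to the configured
    minimum and maximum percentages."""
    if avg_items_per_second <= target_items_per_second:
        return max_pct  # under the target: no need to discard anything
    pct = 100.0 * target_items_per_second / avg_items_per_second
    return max(min_pct, min(max_pct, pct))
```

For example, at an averaged rate of 50 items/second with the default target of 5, the percentage settles near 10.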
Alternative: configure adaptive sampling in code
Instead of setting the sampling parameter in the .config file, you can programmatically set these values.
This allows you to specify a callback function that is invoked whenever the sampling rate is re-evaluated.
You could use this, for example, to find out what sampling rate is being used.
Remove the AdaptiveSamplingTelemetryProcessor node from the .config file.
C#

using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.ApplicationInsights.WindowsServer.Channel.Implementation;
using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;
...

var adaptiveSamplingSettings = new SamplingPercentageEstimatorSettings();

// Optional: here you can adjust the settings from their defaults.

var builder = TelemetryConfiguration.Active.TelemetryProcessorChainBuilder;

builder.UseAdaptiveSampling(
    adaptiveSamplingSettings,

    // Callback on rate re-evaluation:
    (double afterSamplingTelemetryItemRatePerSecond,
     double currentSamplingPercentage,
     double newSamplingPercentage,
     bool isSamplingPercentageChanged,
     SamplingPercentageEstimatorSettings settings) =>
    {
        if (isSamplingPercentageChanged)
        {
            // Report the sampling rate (assumes a TelemetryClient instance named telemetryClient).
            telemetryClient.TrackMetric("samplingPercentage", newSamplingPercentage);
        }
    });

// If you have other telemetry processors:
builder.Use((next) => new AnotherProcessor(next));

builder.Build();
(Learn about telemetry processors.)

Sampling for web pages with JavaScript


You can configure web pages for fixed-rate sampling from any server.
When you configure the web pages for Application Insights, modify the JavaScript snippet that you get
from the Application Insights portal. (In ASP.NET apps, the snippet typically goes in _Layout.cshtml.) Insert
a line like samplingPercentage: 10, before the instrumentation key:
<script>
var appInsights= ...
}({

// Value must be 100/N where N is an integer.


// Valid examples: 50, 25, 20, 10, 5, 1, 0.1, ...
samplingPercentage: 10,

instrumentationKey:...
});

window.appInsights=appInsights;
appInsights.trackPageView();
</script>

For the sampling percentage, choose a percentage that is close to 100/N where N is an integer. Currently
sampling doesn't support other values.
If you also enable fixed-rate sampling at the server, the clients and server will synchronize so that, in
Search, you can navigate between related page views and requests.
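Because only percentages of the form 100/N are supported, you may want to round a desired value to the nearest valid one. Here is a small hypothetical helper (not part of any SDK) that does so:

```python
def nearest_valid_sampling_percentage(desired_pct):
    """Round a desired percentage to the closest supported value of the
    form 100/N, where N is a positive integer."""
    if desired_pct >= 100:
        return 100.0
    n = round(100.0 / desired_pct)
    return 100.0 / max(1, n)
```

For instance, a desired 30% rounds to 100/3 (about 33.33%), a valid setting, while 10% is already valid.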

Fixed-rate sampling for ASP.NET web sites


Fixed rate sampling reduces the traffic sent from your web server and web browsers. Unlike adaptive
sampling, it reduces telemetry at a fixed rate decided by you. It also synchronizes the client and server
sampling so that related items are retained - for example, when you look at a page view in Search, you can
find its related request.
The sampling algorithm retains related items. For each HTTP request event, the request and its related
events are either discarded or transmitted together.
In Metrics Explorer, rates such as request and exception counts are multiplied by a factor to compensate
for the sampling rate, so that they are approximately correct.
Configuring fixed-rate sampling
1. Update your project's NuGet packages to the latest pre-release version of Application Insights. In
Visual Studio, right-click the project in Solution Explorer, choose Manage NuGet Packages, check
Include prerelease, and search for Microsoft.ApplicationInsights.Web.
2. Disable adaptive sampling: In ApplicationInsights.config, remove or comment out the
AdaptiveSamplingTelemetryProcessor node.

<TelemetryProcessors>

<!-- Disabled adaptive sampling:


<Add
Type="Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.AdaptiveSamplingTelemetryProcessor, Microsoft.AI.ServerTelemetryChannel">
<MaxTelemetryItemsPerSecond>5</MaxTelemetryItemsPerSecond>
</Add>
-->

3. Enable the fixed-rate sampling module. Add this snippet to ApplicationInsights.config:


<TelemetryProcessors>
<Add
Type="Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.SamplingTelemetryProcessor, Microsoft.AI.ServerTelemetryChannel">

<!-- Set a percentage close to 100/N where N is an integer. -->


<!-- E.g. 50 (=100/2), 33.33 (=100/3), 25 (=100/4), 20, 1 (=100/100), 0.1 (=100/1000) -->
<SamplingPercentage>10</SamplingPercentage>
</Add>
</TelemetryProcessors>

NOTE
For the sampling percentage, choose a percentage that is close to 100/N where N is an integer. Currently sampling
doesn't support other values.

Alternative: enable fixed-rate sampling in your server code


Instead of setting the sampling parameter in the .config file, you can programmatically set these values.
C#

using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;
...

var builder = TelemetryConfiguration.Active.TelemetryProcessorChainBuilder;

builder.UseSampling(10.0); // percentage

// If you have other telemetry processors:
builder.Use((next) => new AnotherProcessor(next));

builder.Build();

(Learn about telemetry processors.)

When to use sampling?


Adaptive sampling is automatically enabled if you use the ASP.NET SDK version 2.0.0-beta3 or later.
Regardless of which version of the SDK you use, you can enable ingestion sampling to allow Application
Insights to sample the collected data.
In general, for most small and medium size applications you don’t need sampling. The most useful
diagnostic information and most accurate statistics are obtained by collecting data on all your user
activities.
The main advantages of sampling are:
To avoid throttling: the Application Insights service drops ("throttles") data points when your app sends
a very high rate of telemetry in a short time interval.
To keep within the quota of data points for your pricing tier.
To reduce network traffic from the collection of telemetry.
Which type of sampling should I use?
Use ingestion sampling if:
You often go through your monthly quota of telemetry.
You're using a version of the SDK that doesn't support sampling - for example, the Java SDK or
ASP.NET versions earlier than 2.
You're getting a lot of telemetry from your users' web browsers.
Use fixed-rate sampling if:
You're using the Application Insights SDK for ASP.NET web services version 2.0.0 or later, and
You want synchronized sampling between client and server, so that, when you're investigating events
in Search, you can navigate between related events on the client and server, such as page views and
http requests.
You are confident of the appropriate sampling percentage for your app. It should be high enough to get
accurate metrics, but below the rate that exceeds your pricing quota and the throttling limits.
Use adaptive sampling:
If the conditions to use the other forms of sampling do not apply, we recommend adaptive sampling. This
is enabled by default in the ASP.NET server SDK, version 2.0.0-beta3 or later. It will not reduce traffic until a
certain minimum rate is reached; therefore, low-use sites will not be affected.

How do I know whether sampling is in operation?


To discover the actual sampling rate no matter where it has been applied, use an Analytics query such as
this:

requests | where timestamp > ago(1d)


| summarize 100/avg(itemCount) by bin(timestamp, 1h)
| render areachart

In each retained record, itemCount indicates the number of original records that it represents, equal to 1 +
the number of previous discarded records.
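The compensation this query relies on can be illustrated with a short sketch. Given the itemCount values of the retained records, summing them estimates the original record count, and 100 divided by their average recovers the effective sampling percentage (mirroring what the Analytics query above computes):

```python
def effective_sampling_percentage(item_counts):
    """item_counts: the itemCount value of each retained record.
    100 / avg(itemCount) is the effective sampling percentage."""
    return 100.0 / (sum(item_counts) / len(item_counts))

def estimated_original_count(item_counts):
    """Summing itemCount over retained records estimates the number of
    records originally generated, including the discarded ones."""
    return sum(item_counts)
```

For example, at 25% sampling one record in four is kept, so each retained record carries itemCount = 4, and five retained records represent roughly 20 original ones.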

How does sampling work?


Fixed-rate and adaptive sampling are features of the SDK in ASP.NET versions from 2.0.0 onwards.
Ingestion sampling is a feature of the Application Insights service, and can be in operation if the SDK is not
performing sampling.
The sampling algorithm decides which telemetry items to drop, and which ones to keep (whether it's in the
SDK or in the Application Insights service). The sampling decision is based on several rules that aim to
preserve all interrelated data points intact, maintaining a diagnostic experience in Application Insights that
is actionable and reliable even with a reduced data set. For example, if for a failed request your app sends
additional telemetry items (such as exception and traces logged from this request), sampling will not split
this request and other telemetry. It either keeps or drops them all together. As a result, when you look at
the request details in Application Insights, you can always see the request along with its associated
telemetry items.
For applications that define "user" (that is, most typical web applications), the sampling decision is based
on the hash of the user id, which means that all telemetry for any particular user is either preserved or
dropped. For the types of applications that don't define users (such as web services) the sampling decision
is based on the operation id of the request. Finally, for telemetry items that have neither a user nor an
operation id set (for example, telemetry items reported from asynchronous threads with no HTTP context),
sampling simply captures a percentage of telemetry items of each type.
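As a rough illustration of this decision logic (the SDK's actual hash function may differ), hashing the user or operation id to a score in [0, 100) guarantees that all telemetry sharing that id gets the same keep-or-drop verdict:

```python
import hashlib
import random

def sample_in(sampling_percentage, user_id=None, operation_id=None):
    """Decide whether a telemetry item is kept ("sampled in").

    Items sharing a user id (or, failing that, an operation id) hash to
    the same score, so related telemetry is kept or dropped together.
    Items with neither id are sampled per item.
    """
    key = user_id or operation_id
    if key is None:
        return random.uniform(0, 100) < sampling_percentage
    digest = hashlib.md5(key.encode("utf-8")).digest()
    score = int.from_bytes(digest[:8], "big") / 2**64 * 100  # uniform in [0, 100)
    return score < sampling_percentage
```

With this scheme, every item carrying the same user id receives an identical decision at a given percentage, which is what preserves complete user sessions.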
When presenting telemetry back to you, the Application Insights service adjusts the metrics by the same
sampling percentage that was used at the time of collection, to compensate for the missing data points.
Hence, when looking at the telemetry in Application Insights, the users are seeing statistically correct
approximations that are very close to the real numbers.
The accuracy of the approximation largely depends on the configured sampling percentage. Also, the
accuracy increases for applications that handle a large volume of generally similar requests from lots of
users. On the other hand, for applications that don't work with a significant load, sampling is not needed as
these applications can usually send all their telemetry while staying within the quota, without causing data
loss from throttling.

WARNING
Application Insights does not sample metrics and sessions telemetry types. Reduction in the precision can be highly
undesirable for these telemetry types.

Adaptive sampling
Adaptive sampling adds a component that monitors the current rate of transmission from the SDK, and
adjusts the sampling percentage to try to stay within the target maximum rate. The adjustment is
recalculated at regular intervals, and is based on a moving average of the outgoing transmission rate.

Sampling and the JavaScript SDK


The client-side (JavaScript) SDK participates in fixed-rate sampling in conjunction with the server-side SDK.
The instrumented pages will only send client-side telemetry from the same users for which the server-side
made its decision to "sample in." This logic is designed to maintain integrity of user session across client-
and server-sides. As a result, from any particular telemetry item in Application Insights you can find all
other telemetry items for this user or session.
My client and server-side telemetry don't show coordinated samples as you describe above.
Verify that you enabled fixed-rate sampling both on server and client.
Make sure that the SDK version is 2.0 or above.
Check that you set the same sampling percentage in both the client and server.

Frequently Asked Questions


Why isn't sampling a simple "collect X percent of each telemetry type"?
While this sampling approach would provide very high precision in metric approximations, it
would break the ability to correlate diagnostic data per user, session, and request, which is critical for
diagnostics. Therefore, sampling works better with "collect all telemetry items for X percent of app
users" or "collect all telemetry for X percent of app requests" logic. For telemetry items not
associated with requests (such as background asynchronous processing), the fallback is to "collect
X percent of all items for each telemetry type."
Can the sampling percentage change over time?
Yes, adaptive sampling gradually changes the sampling percentage, based on the currently observed
volume of the telemetry.
If I use fixed-rate sampling, how do I know which sampling percentage will work the best for my app?
One way is to start with adaptive sampling, find out what rate it settles on (see the above question),
and then switch to fixed-rate sampling using that rate.
Otherwise, you have to guess. Analyze your current telemetry usage in Application Insights, observe
any throttling that is occurring, and estimate the volume of the collected telemetry. These three
inputs, together with your selected pricing tier, suggest how much you might want to reduce the
volume of the collected telemetry. However, an increase in the number of your users or some other
shift in the volume of telemetry might invalidate your estimate.
What happens if I configure the sampling percentage too low?
An excessively low sampling percentage (over-aggressive sampling) reduces the accuracy of the
approximations, when Application Insights attempts to compensate the visualization of the data for the
data volume reduction. Also, diagnostic experience might be negatively impacted, as some of the
infrequently failing or slow requests may be sampled out.
What happens if I configure the sampling percentage too high?
Configuring too high a sampling percentage (not aggressive enough) results in an insufficient reduction
in the volume of the collected telemetry. You may still experience telemetry data loss related to
throttling, and the cost of using Application Insights might be higher than you planned due to overage
charges.
On what platforms can I use sampling?
Ingestion sampling can occur automatically for any telemetry above a certain volume, if the SDK is not
performing sampling. This would work, for example, if your app uses a Java server, or if you are using
an older version of the ASP.NET SDK.
If you're using ASP.NET SDK versions 2.0.0 and above (hosted either in Azure or on your own server),
you get adaptive sampling by default, but you can switch to fixed-rate as described above. With fixed-
rate sampling, the browser SDK automatically synchronizes to sample related events.
There are certain rare events I always want to see. How can I get them past the sampling module?
Initialize a separate instance of TelemetryClient with a new TelemetryConfiguration (not the default
Active one). Use that to send your rare events.

Next steps
Filtering can provide more strict control of what your SDK sends.
Manage pricing and data volume in Application
Insights
1/3/2018 • 12 min to read

Pricing for Azure Application Insights is based on data volume per application. Low usage during development or
for a small app is likely to be free, because there's a 1 GB monthly allowance of telemetry data.
Each Application Insights resource is charged as a separate service, and contributes to the bill for your
subscription to Azure.
There are two pricing plans. The default plan is called Basic. You can opt for the Enterprise plan, which has a daily
charge, but enables certain additional features such as continuous export.
If you have questions about how pricing works for Application Insights, feel free to post a question in our forum.

The price plans


See the Application Insights pricing page for current prices in your currency.
Basic plan
The Basic plan is the default when a new Application Insights resource is created, and will suffice for most
customers.
In the Basic plan, you are charged by data volume: number of bytes of telemetry received by Application
Insights. Data volume is measured as the size of the uncompressed JSON data package received by
Application Insights from your application. For tabular data imported into Analytics, the data volume is
measured as the uncompressed size of files sent to Application Insights.
Your first 1 GB for each app is free, so if you're just experimenting or developing, you're unlikely to have to
pay.
Live Metrics Stream data isn't counted for pricing purposes.
Continuous Export is available for an extra per-GB charge in the Basic plan.
Enterprise plan
In the Enterprise plan, your app can use all the features of Application Insights. Continuous Export and
Log Analytics connector are available without any extra charge in the Enterprise plan.
You pay per node that is sending telemetry for any apps in the Enterprise plan.
A node is a physical or virtual server machine, or a Platform-as-a-Service role instance, that hosts your
app.
Development machines, client browsers, and mobile devices are not counted as nodes.
If your app has several components that send telemetry, such as a web service and a back-end worker,
they are counted separately.
Live Metrics Stream data isn't counted for pricing purposes.
Across a subscription, your charges are per node, not per app. If you have five nodes sending
telemetry for 12 apps, then the charge is for five nodes.
Although charges are quoted per month, you're charged only for any hour in which a node sends telemetry
from an app. The hourly charge is the quoted monthly charge / 744 (the number of hours in a 31-day month).
A data volume allocation of 200 MB per day is given for each node detected (with hourly granularity). Unused
data allocation is not carried over from one day to the next.
If you choose the Enterprise pricing option, each subscription gets a daily allowance of data based on
the number of nodes sending telemetry to the Application Insights resources in that subscription. So if
you have 5 nodes sending data all day, you will have a pooled allowance of 1 GB applied to all the
Application Insights resources in that subscription. It doesn't matter if certain nodes are sending more
data than other nodes because the included data is shared across all nodes. If, on a given day, the
Application Insights resources receive more data than is included in the daily data allocation for this
subscription, the per-GB overage data charges apply.
The daily data allowance is calculated as the number of hours in the day (using UTC) that each node is
sending telemetry, divided by 24, times 200 MB. So if you have 4 nodes sending telemetry during 15 of
the 24 hours in the day, the included data for that day would be ((4 x 15) / 24) x 200 MB = 500 MB. At
the price of 2.30 USD per GB for data overage, the charge would be 1.15 USD if the nodes send 1
GB of data that day.
Note that the Enterprise plan's daily allowance is not shared with applications for which you have
chosen the Basic option, and unused allowance is not carried over from day to day.
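The allowance arithmetic from the example above can be reproduced directly. This sketch hard-codes the figures quoted in this document (200 MB per node per day, 2.30 USD per GB overage), which may change; check the pricing page for current rates:

```python
def included_mb(node_hours, mb_per_node_day=200):
    """Daily included data for the subscription: (node-hours / 24) x 200 MB."""
    return node_hours / 24 * mb_per_node_day

def overage_charge_usd(data_sent_gb, included_mb_value, usd_per_gb=2.30):
    """Charge for data beyond the pooled daily allowance (1 GB = 1000 MB,
    matching the arithmetic in the worked example)."""
    overage_gb = max(0.0, data_sent_gb - included_mb_value / 1000)
    return overage_gb * usd_per_gb
```

With 4 nodes sending during 15 hours, the included data is 500 MB, and sending 1 GB that day incurs a 1.15 USD overage charge, matching the worked example.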
Here are some examples of determining distinct node count:

SCENARIO | TOTAL DAILY NODE COUNT
1 application using 3 Azure App Service instances and 1 virtual server | 4
3 applications running on 2 VMs; the Application Insights resources for these applications are in the same subscription and in the Enterprise plan | 2
4 applications whose Application Insights resources are in the same subscription; each application runs 2 instances during 16 off-peak hours, and 4 instances during 8 peak hours | 13.33
Cloud services with 1 Worker Role and 1 Web Role, each running 2 instances | 4
5-node Service Fabric cluster running 50 micro-services, each micro-service running 3 instances | 5

The precise node counting behavior depends on which Application Insights SDK your application is using.
In SDK versions 2.2 and later, both the Application Insights Core SDK and the Web SDK report each
application host as a node: for example, the computer name for physical server and VM hosts, or the
instance name for cloud services. The only exception is applications using only .NET Core
and the Application Insights Core SDK, in which case only one node is reported for all hosts
because the host name is not available.
For earlier versions of the SDK, the Web SDK behaves just as the newer SDK versions; however, the
Core SDK reports only one node, regardless of the number of actual application hosts.
Note that if your application is using the SDK to set roleInstance to a custom value, by default that
same value will be used to determine the count of nodes.
If you are using a new SDK version with an app that is run from client machines or mobile devices, it is
possible that the count of nodes might return a number which is very large (from the large number of
client machines or mobile devices).
Multi-step web tests
There's an additional charge for multi-step web tests. This refers to web tests that perform a sequence of actions.
There is no separate charge for 'ping tests' of a single page. Telemetry from both ping tests and multi-step tests
is charged along with other telemetry from your app.

Operations Management Suite subscription entitlement


As recently announced, customers who purchase Microsoft Operations Management Suite E1 and E2 are able to
get Application Insights Enterprise as an additional component at no additional cost. Specifically, each unit of
Operations Management Suite E1 and E2 includes an entitlement to 1 node of the Enterprise plan of Application
Insights. As noted above, each Application Insights node includes up to 200 MB of data ingested per day
(separate from Log Analytics data ingestion), with 90-day data retention at no additional cost.

NOTE
To ensure that you get this entitlement, you must have your Application Insights resources in the Enterprise pricing plan.
This entitlement applies only as nodes, so Application Insights resources in the Basic plan will not realize any benefit. Note
that this entitlement will not be visible on the estimated costs shown on the Features + pricing blade.

Review pricing plans and estimate costs


Application Insights makes it easy to understand the pricing plans available and what the costs are likely to be
based on recent usage patterns. Start by opening the Features + Pricing blade in the Application Insights
resource in the Azure portal:

a. Review your data volume for the month. This includes all the data received and retained (after any sampling)
from your server and client apps, and from availability tests.
b. A separate charge is made for multi-step web tests. (This doesn't include simple availability tests, which are
included in the data volume charge.)
c. Enable the Enterprise plan.
d. Click through to data management options to view data volume for the last month, set a daily cap or set
ingestion sampling.
Application Insights charges are added to your Azure bill. You can see details of your Azure bill on the Billing
section of the Azure portal or in the Azure Billing Portal.

Data rate
There are three ways in which the volume of data you send is limited:
Sampling: This mechanism can be used to reduce the amount of telemetry sent from your server and client
apps, with minimal distortion of metrics. This is the primary tool you have to tune the amount of data. Learn
more about sampling features.
Daily cap: When creating an Application Insights resource from the Azure portal this is set to 100 GB/day.
The default when creating an Application Insights resource from Visual Studio is small (only 32.3 MB/day),
which is intended only to facilitate testing; in this case the user is expected to raise the daily cap before
deploying the app into production. The maximum cap is 1,000 GB/day unless you have requested a higher
maximum for a high-traffic application. Use care when setting the daily cap: your intent should be never to
hit it, because you will then lose data for the remainder of the day and be unable to monitor
your application. To change it, use the Daily volume cap blade, linked from the Data Volume Management
blade (see below). Note that some subscription types have credit that cannot be used for Application
Insights. If the subscription has a spending limit, the daily cap blade has instructions on how to remove it
and enable the daily cap to be raised beyond 32.3 MB/day.
Throttling: This limits the data rate to 32 k events per second, averaged over 1 minute.
What happens if my app exceeds the throttling rate?
The volume of data that your app sends is assessed every minute. If it exceeds the per-second rate averaged
over the minute, the server refuses some requests. The SDK buffers the data and then tries to resend,
spreading a surge out over several minutes. If your app consistently sends data at above the throttling rate,
some data will be dropped. (The ASP.NET, Java, and JavaScript SDKs try to resend in this way; other SDKs
might simply drop throttled data.) If throttling occurs, you'll see a notification warning that this has happened.
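The throttling rule, a per-second limit averaged over one minute, can be expressed as a quick check. The 32,000 events/second figure is taken from the limits table later in this document:

```python
def is_throttled(events_in_last_minute, limit_per_second=32_000):
    """True if the one-minute average event rate exceeds the per-second limit.
    A short burst within the minute is fine as long as the average stays under."""
    return events_in_last_minute / 60 > limit_per_second
```

So a burst of 100,000 events within one minute is well under the limit, while a sustained rate above 32,000 events/second across the minute is throttled.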
How do I know how much data my app is sending?
Open the Data volume management blade to see the Daily data volume chart.
Or in Metrics Explorer, add a new chart and select Data point volume as its metric. Switch on Grouping, and
group by Data type.

To reduce your data rate


Here are some things you can do to reduce your data volume:
Use Sampling. This technology reduces data rate without skewing your metrics, and without disrupting the
ability to navigate between related items in Search. In server apps, it operates automatically.
Limit the number of Ajax calls that can be reported in every page view, or switch off Ajax reporting.
Switch off collection modules you don't need by editing ApplicationInsights.config. For example, you might
decide that performance counters or dependency data are inessential.
Split your telemetry to separate instrumentation keys.
Pre-aggregate metrics. If you have put calls to TrackMetric in your app, you can reduce traffic by using the
overload that accepts your calculation of the average and standard deviation of a batch of measurements. Or
you can use a pre-aggregating package.
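Pre-aggregation as described here, collapsing a batch of measurements into a single count/mean/standard-deviation summary before sending, can be sketched as follows (illustrative only; the actual TrackMetric overload is part of the .NET SDK):

```python
import math

def aggregate_batch(measurements):
    """Collapse a batch of measurements into count, mean, and (population)
    standard deviation, so one aggregate metric item can be sent instead
    of one telemetry item per raw value."""
    n = len(measurements)
    mean = sum(measurements) / n
    variance = sum((x - mean) ** 2 for x in measurements) / n
    return {"count": n, "mean": mean, "stddev": math.sqrt(variance)}
```

Sending one aggregate per batch instead of one item per measurement reduces traffic by roughly the batch size while preserving the statistics shown in Metrics Explorer.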

Managing the maximum daily data volume


You can use the daily volume cap to limit the data collected, but if the cap is met, all telemetry sent from
your application for the remainder of the day is lost. It is not advisable to have your application hit the
daily cap, since you are then unable to track the health and performance of your application.
Instead, use sampling to tune the data volume to the level you'd like, and use the daily cap only as a "last resort"
in case your application starts sending much higher volumes of telemetry unexpectedly.
To change the daily cap, in the Configure section of your Application Insights resource, click Data volume
management then Daily Cap.

Sampling
Sampling is a method of reducing the rate at which telemetry is sent to your app, while still retaining the ability
to find related events during diagnostic searches, and still retaining correct event counts.
Sampling is an effective way to reduce charges and stay within your monthly quota. The sampling algorithm
retains related items of telemetry, so that, for example, when you use Search, you can find the request related to
a particular exception. The algorithm also retains correct counts, so that you see the correct values in Metric
Explorer for request rates, exception rates, and other counts.
There are several forms of sampling.
Adaptive sampling is the default for the ASP.NET SDK, which automatically adjusts to the volume of telemetry
that your app sends. It operates automatically in the SDK in your web app, so that the telemetry traffic on the
network is reduced.
Ingestion sampling is an alternative that operates at the point where telemetry from your app enters the
Application Insights service. It doesn't affect the volume of telemetry sent from your app, but it reduces the
volume retained by the service. You can use it to reduce the quota used up by telemetry from browsers and
other SDKs.
To set ingestion sampling, set the control in the Pricing blade:

WARNING
The Data sampling blade only controls the value of ingestion sampling. It doesn't reflect the sampling rate that is being
applied by the Application Insights SDK in your app. If the incoming telemetry has already been sampled at the SDK,
ingestion sampling is not applied.

To discover the actual sampling rate no matter where it has been applied, use an Analytics query such as this:

requests | where timestamp > ago(1d)


| summarize 100/avg(itemCount) by bin(timestamp, 1h)
| render areachart

In each retained record, itemCount indicates the number of original records that it represents, equal to 1 + the
number of previous discarded records.

Automation
You can write a script to set the price plan, using Azure Resource Management. Learn how.

Limits summary
There are some limits on the number of metrics and events per application (that is, per instrumentation key).
Limits depend on the pricing plan that you choose.

RESOURCE | DEFAULT LIMIT | NOTE
Total data per day | 100 GB | You can reduce data by setting a cap. If you need more, you can increase the limit up to 1,000 GB from the portal. For capacities greater than 1,000 GB, send mail to AIDataCap@microsoft.com.
Free data per month (Basic price plan) | 1 GB | Additional data is charged per gigabyte.
Throttling | 32 k events/second | The limit is measured over a minute.
Data retention | 90 days | This resource is for Search, Analytics, and Metrics Explorer.
Availability multi-step test detailed results retention | 90 days | This resource provides detailed results of each step.
Maximum event size | 64 K |
Property and metric name length | 150 | See type schemas
Property value string length | 8,192 | See type schemas
Trace and exception message length | 10 k | See type schemas
Availability tests count per app | 100 |
Profiler data retention | 5 days |
Profiler data sent per day | 10 GB |

For more information, see About pricing and quotas in Application Insights.

Next steps
Sampling
Application Performance Monitoring using
Application Insights for SCOM
11/1/2017 • 3 min to read

If you use System Center Operations Manager (SCOM) to manage your servers, you can monitor performance and
diagnose performance issues with the help of Azure Application Insights. Application Insights monitors your web
application's incoming requests, outgoing REST and SQL calls, exceptions, and log traces. It provides dashboards
with metric charts and smart alerts, as well as powerful diagnostic search and analytical queries over this telemetry.
You can switch on Application Insights monitoring by using an SCOM management pack.

Before you start


We assume:
You're familiar with SCOM, and that you use SCOM 2012 R2 or 2016 to manage your IIS web servers.
You have already installed on your servers a web application that you want to monitor with Application Insights.
App framework version is .NET 4.5 or later.
You have access to a subscription in Microsoft Azure and can sign in to the Azure portal. Your organization may
have a subscription, and can add your Microsoft account to it.
(The development team might build the Application Insights SDK into the web app. This build-time instrumentation
gives them greater flexibility in writing custom telemetry. However, it doesn't matter: you can follow the steps
described here either with or without the SDK built in.)

(One time) Install Application Insights management pack


On the machine where you run Operations Manager:
1. Uninstall any old version of the management pack:
a. In Operations Manager, open Administration, Management Packs.
b. Delete the old version.
2. Download and install the management pack from the catalog.
3. Restart Operations Manager.

Create a management pack


1. In Operations Manager, open Authoring, .NET...with Application Insights, Add Monitoring Wizard, and
again choose .NET...with Application Insights.
2. Name the configuration after your app. (You have to instrument one app at a time.)

3. On the same wizard page, either create a new management pack, or select a pack that you created for
Application Insights earlier.
(The Application Insights management pack is a template, from which you create an instance. You can reuse
the same instance later.)
4. Choose one app that you want to monitor. The search feature searches among apps installed on your
servers.

The optional Monitoring scope field can be used to specify a subset of your servers, if you don't want to
monitor the app in all servers.
5. On the next wizard page, you must first provide your credentials to sign in to Microsoft Azure.
On this page, you choose the Application Insights resource where you want the telemetry data to be
analyzed and displayed.
If the application was configured for Application Insights during development, select its existing resource.
Otherwise, create a new resource named for the app. If there are other apps that are components of
the same system, put them in the same resource group, to make access to the telemetry easier to
manage.
You can change these settings later.

6. Complete the wizard.

Repeat this procedure for each app that you want to monitor.
If you need to change settings later, re-open the properties of the monitor from the Authoring window.
Verify monitoring
The monitor that you have installed searches for your app on every server. Where it finds the app, it configures
Application Insights Status Monitor to monitor the app. If necessary, it first installs Status Monitor on the server.
You can verify which instances of the app it has found.

View telemetry in Application Insights


In the Azure portal, browse to the resource for your app. You see charts showing telemetry from your app. (If it
hasn't shown up on the main page yet, click Live Metrics Stream.)

Next steps
Set up a dashboard to bring together the most important charts monitoring this and other apps.
Learn about metrics
Set up alerts
Diagnosing performance issues
Powerful Analytics queries
Availability web tests
Export telemetry from Application Insights
11/1/2017 • 6 min to read

Want to keep your telemetry for longer than the standard retention period? Or process it in some specialized
way? Continuous Export is ideal for this. The events you see in the Application Insights portal can be exported
to storage in Microsoft Azure in JSON format. From there you can download your data and write whatever
code you need to process it.
Using Continuous Export may incur an additional charge. Check your pricing model.
Before you set up continuous export, there are some alternatives you might want to consider:
The Export button at the top of a metrics or search blade lets you transfer tables and charts to an Excel
spreadsheet.
Analytics provides a powerful query language for telemetry. It can also export results.
If you're looking to explore your data in Power BI, you can do that without using Continuous Export.
The Data access REST API lets you access your telemetry programmatically.
After Continuous Export copies your data to storage (where it can stay for as long as you like), it's still available
in Application Insights for the usual retention period.

Create a Continuous Export


1. In the Application Insights resource for your app, open Continuous Export and choose Add:

2. Choose the telemetry data types you want to export.


3. Create or select an Azure storage account where you want to store the data.

WARNING
By default, the storage location will be set to the same geographical region as your Application Insights resource.
If you store in a different region, you may incur transfer charges.
4. Create or select a container in the storage:

Once you've created your export, it starts going. You only get data that arrives after you create the export.
There can be a delay of about an hour before data appears in the storage.
To edit continuous export
If you want to change the event types later, just edit the export:

To stop continuous export


To stop the export, click Disable. When you click Enable again, the export will restart with new data. You won't
get the data that arrived in the portal while export was disabled.
To stop the export permanently, delete it. Doing so doesn't delete your data from storage.
Can't add or change an export?
To add or change exports, you need Owner, Contributor or Application Insights Contributor access rights.
Learn about roles.

What events do you get?


The exported data is the raw telemetry we receive from your application, except that we add location data
which we calculate from the client IP address.
Data that has been discarded by sampling is not included in the exported data.
Other calculated metrics are not included. For example, we don't export average CPU utilization, but we do
export the raw telemetry from which the average is computed.
The data also includes the results of any availability web tests that you have set up.

NOTE
Sampling. If your application sends a lot of data, the sampling feature may operate and send only a fraction of the
generated telemetry. Learn more about sampling.

Inspect the data


You can inspect the storage directly in the portal. Click Browse, select your storage account, and then open
Containers.
To inspect Azure storage in Visual Studio, open View, Cloud Explorer. (If you don't have that menu command,
you need to install the Azure SDK: Open the New Project dialog, expand Visual C#/Cloud and choose Get
Microsoft Azure SDK for .NET.)
When you open your blob store, you'll see a container with a set of blob files. The URI of each file is derived from
your Application Insights resource name, its instrumentation key, and the telemetry type, date, and time. (The resource
name is all lowercase, and the instrumentation key omits dashes.)

The date and time are UTC and are when the telemetry was deposited in the store - not the time it was
generated. So if you write code to download the data, it can move linearly through the data.
Here's the form of the path:

$"{applicationName}_{instrumentationKey}/{type}/{blobDeliveryTimeUtc:yyyy-MM-dd}/{blobDeliveryTimeUtc:HH}/{blobId}_{blobCreationTimeUtc:yyyyMMdd_HHmmss}.blob"
where:
blobCreationTimeUtc is the time when the blob was created in the internal staging storage
blobDeliveryTimeUtc is the time when the blob was copied to the export destination storage
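When you write a downloader, it can help to pull the documented parts back out of a blob path. A minimal sketch in Python (the resource name, instrumentation key, and blob name below are made-up examples following the template above):

```python
from datetime import datetime

def parse_export_path(blob_path):
    # Path template: {app}_{ikey}/{type}/{yyyy-MM-dd}/{HH}/{blobId}_{stamp}.blob
    root, telemetry_type, date_part, hour_part, file_name = blob_path.split("/")
    app_name, ikey = root.split("_")
    # The date/time segments are the UTC delivery hour of the blob.
    delivery_hour = datetime.strptime(f"{date_part} {hour_part}", "%Y-%m-%d %H")
    return app_name, ikey, telemetry_type, delivery_hour

app, ikey, kind, hour = parse_export_path(
    "webapplication27_123456781234123412341234123456789abc/Requests/"
    "2018-01-05/14/0123456789_20180105_141003.blob")
print(app, kind, hour.isoformat())
```

Because the delivery hour is encoded in the path, a downloader can walk the blobs in time order without opening them.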

Data format
Each blob is a text file that contains multiple '\n'-separated rows. It contains the telemetry processed over a
time period of roughly half a minute.
Each row represents a telemetry data point such as a request or page view.
Each row is an unformatted JSON document. If you want to sit and stare at it, open it in Visual Studio and
choose Edit, Advanced, Format File:

Time durations are in ticks, where 10,000 ticks = 1 ms. For example, these values show a time of 1 ms to send a
request from the browser, 3 ms to receive it, and 1.8 s to process the page in the browser:

"sendRequest": {"value": 10000.0},


"receiveRequest": {"value": 30000.0},
"clientProcess": {"value": 17970000.0}

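As a quick check of the tick arithmetic, here is a sketch that converts the values shown above back to milliseconds:

```python
def ticks_to_ms(ticks):
    # Exported durations are in ticks: 10,000 ticks == 1 millisecond.
    return ticks / 10_000

# The values from the snippet above:
for name, value in [("sendRequest", 10000.0),
                    ("receiveRequest", 30000.0),
                    ("clientProcess", 17970000.0)]:
    print(name, ticks_to_ms(value), "ms")
```

The clientProcess value works out to 1797 ms, which the text rounds to 1.8 s.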
Detailed data model reference for the property types and values.

Processing the data


On a small scale, you can write some code to pull apart your data, read it into a spreadsheet, and so on. For
example:

// Requires: using System.Collections.Generic; using System.IO; using System.Linq;
// and Newtonsoft.Json (for JsonConvert).
private IEnumerable<T> DeserializeMany<T>(string folderName)
{
    var files = Directory.GetFiles(folderName, "*.blob", SearchOption.AllDirectories);
    foreach (var file in files)
    {
        using (var fileReader = File.OpenText(file))
        {
            string fileContent = fileReader.ReadToEnd();
            // Each line of a blob is one JSON telemetry document.
            IEnumerable<string> entities = fileContent.Split('\n').Where(s => !string.IsNullOrWhiteSpace(s));
            foreach (var entity in entities)
            {
                yield return JsonConvert.DeserializeObject<T>(entity);
            }
        }
    }
}
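The same pattern for readers not on .NET: a sketch in Python using only the standard library, walking a downloaded folder of .blob files and yielding one parsed telemetry document per line.

```python
import json
import os

def deserialize_many(folder_name):
    """Yield one dict per telemetry row from every .blob file under folder_name.

    Each blob is a text file of newline-separated JSON documents, as
    described in the Data format section above."""
    for dirpath, _dirs, files in os.walk(folder_name):
        for name in files:
            if not name.endswith(".blob"):
                continue
            with open(os.path.join(dirpath, name), encoding="utf-8") as f:
                for line in f:
                    if line.strip():
                        yield json.loads(line)
```

For example, `list(deserialize_many("export-download"))` returns every event in the downloaded folder as a list of dicts.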

For a larger code sample, see using a worker role.


Delete your old data
Please note that you are responsible for managing your storage capacity and deleting the old data if necessary.

If you regenerate your storage key...


If you change the key to your storage, continuous export will stop working. You'll see a notification in your
Azure account.
Open the Continuous Export blade and edit your export. Edit the Export Destination, but just leave the same
storage selected. Click OK to confirm.

The continuous export will restart.

Export samples
Export to SQL using Stream Analytics
Stream Analytics sample 2
On larger scales, consider HDInsight - Hadoop clusters in the cloud. HDInsight provides a variety of
technologies for managing and analyzing big data, and you could use it to process data that has been exported
from Application Insights.

Q&A
But all I want is a one-time download of a chart.
Yes, you can do that. At the top of the blade, click Export Data.
I set up an export, but there's no data in my store.
Did Application Insights receive any telemetry from your app since you set up the export? You'll only
receive new data.
I tried to set up an export, but was denied access
If the account is owned by your organization, you have to be a member of the owners or contributors
groups.
Can I export straight to my own on-premises store?
No, sorry. Our export engine currently works only with Azure storage.
Is there any limit to the amount of data you put in my store?
No. We'll keep pushing data in until you delete the export. We'll stop if we hit the outer limits for blob
storage, but that's pretty huge. It's up to you to control how much storage you use.
How many blobs should I see in the storage?
For every data type you selected to export, a new blob is created every minute (if data is available).
In addition, for applications with high traffic, additional partition units are allocated. In this case each
unit creates a blob every minute.
I regenerated the key to my storage or changed the name of the container, and now the export doesn't
work.
Edit the export and open the export destination blade. Leave the same storage selected as before, and
click OK to confirm. Export will restart. If the change was within the past few days, you won't lose data.
Can I pause the export?
Yes. Click Disable.

Code samples
Stream Analytics sample
Export to SQL using Stream Analytics
Detailed data model reference for the property types and values.
Walkthrough: Export to SQL from Application
Insights using Stream Analytics
1/5/2018 • 6 min to read

This article shows how to move your telemetry data from Azure Application Insights into an Azure SQL database
by using Continuous Export and Azure Stream Analytics.
Continuous export moves your telemetry data into Azure Storage in JSON format. We'll parse the JSON objects
using Azure Stream Analytics and create rows in a database table.
(More generally, Continuous Export is the way to do your own analysis of the telemetry your apps send to
Application Insights. You could adapt this code sample to do other things with the exported telemetry, such as
aggregation of data.)
We'll start with the assumption that you already have the app you want to monitor.
In this example, we will be using the page view data, but the same pattern can easily be extended to other data
types such as custom events and exceptions.

Add Application Insights to your application


To get started:
1. Set up Application Insights for your web pages.
(In this example, we'll focus on processing page view data from the client browsers, but you could also set
up Application Insights for the server side of your Java or ASP.NET app and process request, dependency
and other server telemetry.)
2. Publish your app, and watch telemetry data appearing in your Application Insights resource.

Create storage in Azure


Continuous export always outputs data to an Azure Storage account, so you need to create the storage first.
1. Create a storage account in your subscription in the Azure portal.
2. Create a container

3. Copy the storage access key


You'll need it soon to set up the input to the stream analytics service.
Start continuous export to Azure storage
1. In the Azure portal, browse to the Application Insights resource you created for your application.

2. Create a continuous export.

Select the storage account you created earlier:


Set the event types you want to see:

3. Let some data accumulate. Sit back and let people use your application for a while. Telemetry will come in
and you'll see statistical charts in metric explorer and individual events in diagnostic search.
And also, the data will export to your storage.
4. Inspect the exported data, either in the portal - choose Browse, select your storage account, and then
Containers - or in Visual Studio. In Visual Studio, choose View / Cloud Explorer, and open Azure /
Storage. (If you don't have this menu option, you need to install the Azure SDK: Open the New Project
dialog and open Visual C# / Cloud / Get Microsoft Azure SDK for .NET.)

Make a note of the common part of the path name, which is derived from the application name and
instrumentation key.
The events are written to blob files in JSON format. Each file may contain one or more events. So we'd like to read
the event data and filter out the fields we want. There are all kinds of things we could do with the data, but our
plan today is to use Stream Analytics to move the data to a SQL database. That will make it easy to run lots of
interesting queries.

Create an Azure SQL Database


Once again starting from your subscription in Azure portal, create the database (and a new server, unless you've
already got one) to which you'll write the data.

Make sure that the database server allows access to Azure services:

Create a table in Azure SQL DB


Connect to the database created in the previous section with your preferred management tool. In this
walkthrough, we will be using SQL Server Management Tools (SSMS).
Create a new query, and execute the following T-SQL:

CREATE TABLE [dbo].[PageViewsTable](
[pageName] [nvarchar](max) NOT NULL,
[viewCount] [int] NOT NULL,
[url] [nvarchar](max) NULL,
[urlDataPort] [int] NULL,
[urlDataprotocol] [nvarchar](50) NULL,
[urlDataHost] [nvarchar](50) NULL,
[urlDataBase] [nvarchar](50) NULL,
[urlDataHashTag] [nvarchar](max) NULL,
[eventTime] [datetime] NOT NULL,
[isSynthetic] [nvarchar](50) NULL,
[deviceId] [nvarchar](50) NULL,
[deviceType] [nvarchar](50) NULL,
[os] [nvarchar](50) NULL,
[osVersion] [nvarchar](50) NULL,
[locale] [nvarchar](50) NULL,
[userAgent] [nvarchar](max) NULL,
[browser] [nvarchar](50) NULL,
[browserVersion] [nvarchar](50) NULL,
[screenResolution] [nvarchar](50) NULL,
[sessionId] [nvarchar](max) NULL,
[sessionIsFirst] [nvarchar](50) NULL,
[clientIp] [nvarchar](50) NULL,
[continent] [nvarchar](50) NULL,
[country] [nvarchar](50) NULL,
[province] [nvarchar](50) NULL,
[city] [nvarchar](50) NULL
)

CREATE CLUSTERED INDEX [pvTblIdx] ON [dbo].[PageViewsTable]
(
[eventTime] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF,
ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
In this sample, we are using data from page views. To see the other data available, inspect your JSON output, and
see the export data model.

Create an Azure Stream Analytics instance


From the Azure portal, select the Azure Stream Analytics service, and create a new Stream Analytics job:
When the new job is created, select Go to resource.

Add a new input


Set it to take input from your Continuous Export blob:
Now you'll need the Primary Access Key from your Storage Account, which you noted earlier. Set this as the
Storage Account Key.
Set path prefix pattern
Be sure to set the Date Format to YYYY-MM-DD (with dashes).
The Path Prefix Pattern specifies how Stream Analytics finds the input files in the storage. You need to set it to
correspond to how Continuous Export stores the data. Set it like this:

webapplication27_12345678123412341234123456789abcdef0/PageViews/{date}/{time}

In this example:
webapplication27 is the name of the Application Insights resource, all in lower case.
1234... is the instrumentation key of the Application Insights resource with dashes removed.
PageViews is the type of data we want to analyze. The available types depend on the filter you set in
Continuous Export. Examine the exported data to see the other available types, and see the export data model.
/{date}/{time} is a pattern written literally.
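Getting this string exactly right is a common stumbling block, so it can help to generate it rather than type it. A sketch in Python (the resource name and instrumentation key are the made-up values from the example above):

```python
def path_prefix_pattern(resource_name, instrumentation_key, data_type):
    # The resource name is lowercased and the instrumentation key loses its
    # dashes in the exported path; {date} and {time} are literal placeholders
    # that Stream Analytics substitutes itself.
    return (f"{resource_name.lower()}_"
            f"{instrumentation_key.replace('-', '')}/"
            f"{data_type}/{{date}}/{{time}}")

print(path_prefix_pattern("WebApplication27",
                          "12345678-1234-1234-1234-123456789abcdef0",
                          "PageViews"))
```

Paste the printed value into the Path Prefix Pattern field of the input definition.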

To get the name and iKey of your Application Insights resource, open Essentials on its overview page, or open
Settings.

TIP
Use the Sample function to check that you have set the input path correctly. If it fails: Check that there is data in the
storage for the sample time range you chose. Edit the input definition and check you set the storage account, path prefix
and date format correctly.

Set query
Open the query section:

Replace the default query with:

SELECT flat.ArrayValue.name as pageName
, flat.ArrayValue.count as viewCount
, flat.ArrayValue.url as url
, flat.ArrayValue.urlData.port as urlDataPort
, flat.ArrayValue.urlData.protocol as urlDataprotocol
, flat.ArrayValue.urlData.host as urlDataHost
, flat.ArrayValue.urlData.base as urlDataBase
, flat.ArrayValue.urlData.hashTag as urlDataHashTag
,A.context.data.eventTime as eventTime
,A.context.data.isSynthetic as isSynthetic
,A.context.device.id as deviceId
,A.context.device.type as deviceType
,A.context.device.os as os
,A.context.device.osVersion as osVersion
,A.context.device.locale as locale
,A.context.device.userAgent as userAgent
,A.context.device.browser as browser
,A.context.device.browserVersion as browserVersion
,A.context.device.screenResolution.value as screenResolution
,A.context.session.id as sessionId
,A.context.session.isFirst as sessionIsFirst
,A.context.location.clientip as clientIp
,A.context.location.continent as continent
,A.context.location.country as country
,A.context.location.province as province
,A.context.location.city as city
INTO
    AIOutput
FROM AIinput A
CROSS APPLY GetElements(A.[view]) as flat

Notice that the first few properties are specific to page view data. Exports of other telemetry types will have
different properties. See the detailed data model reference for the property types and values.

Set up output to database


Select SQL as the output.
Specify the SQL database.

Close the wizard and wait for a notification that the output has been set up.

Start processing
Start the job from the action bar:
You can choose whether to start processing the data starting from now, or to start with earlier data. The latter is
useful if you have had Continuous Export already running for a while.
After a few minutes, go back to SQL Server Management Tools and watch the data flowing in. For example, use a
query like this:

SELECT TOP 100 *


FROM [dbo].[PageViewsTable]

Related articles
Export to PowerBI using Stream Analytics
Detailed data model reference for the property types and values.
Continuous Export in Application Insights
Application Insights
Use Stream Analytics to process exported data from
Application Insights
1/4/2018 • 5 min to read

Azure Stream Analytics is the ideal tool for processing data exported from Application Insights. Stream Analytics
can pull data from a variety of sources. It can transform and filter the data, and then route it to a variety of sinks.
In this example, we'll create an adaptor that takes data from Application Insights, renames and processes some of
the fields, and pipes it into Power BI.

WARNING
There are much better and easier recommended ways to display Application Insights data in Power BI. The path illustrated
here is just an example to illustrate how to process exported data.

Create storage in Azure


Continuous export always outputs data to an Azure Storage account, so you need to create the storage first.
1. Create a "classic" storage account in your subscription in the Azure portal.
2. Create a container

3. Copy the storage access key


You'll need it soon to set up the input to the stream analytics service.
Start continuous export to Azure storage
Continuous export moves data from Application Insights into Azure storage.
1. In the Azure portal, browse to the Application Insights resource you created for your application.

2. Create a continuous export.

Select the storage account you created earlier:

Set the event types you want to see:


3. Let some data accumulate. Sit back and let people use your application for a while. Telemetry will come in
and you'll see statistical charts in metric explorer and individual events in diagnostic search.
And also, the data will export to your storage.
4. Inspect the exported data. In Visual Studio, choose View / Cloud Explorer, and open Azure / Storage. (If
you don't have this menu option, you need to install the Azure SDK: Open the New Project dialog and open
Visual C# / Cloud / Get Microsoft Azure SDK for .NET.)

Make a note of the common part of the path name, which is derived from the application name and
instrumentation key.
The events are written to blob files in JSON format. Each file may contain one or more events. So we'd like to read
the event data and filter out the fields we want. There are all kinds of things we could do with the data, but our plan
today is to use Stream Analytics to pipe the data to Power BI.

Create an Azure Stream Analytics instance


From the Azure portal, select the Azure Stream Analytics service, and create a new Stream Analytics job:
When the new job is created, select Go to resource.

Add a new input


Set it to take input from your Continuous Export blob:
Now you'll need the Primary Access Key from your Storage Account, which you noted earlier. Set this as the
Storage Account Key.
Set path prefix pattern
Be sure to set the Date Format to YYYY-MM-DD (with dashes).
The Path Prefix Pattern specifies where Stream Analytics finds the input files in the storage. You need to set it to
correspond to how Continuous Export stores the data. Set it like this:

webapplication27_12345678123412341234123456789abcdef0/PageViews/{date}/{time}

In this example:
webapplication27 is the name of the Application Insights resource all lower case.
1234... is the instrumentation key of the Application Insights resource, omitting dashes.
PageViews is the type of data you want to analyze. The available types depend on the filter you set in
Continuous Export. Examine the exported data to see the other available types, and see the export data model.
/{date}/{time} is a pattern written literally.
NOTE
Inspect the storage to make sure you get the path right.

Add new output


Now select your job > Outputs > Add.

Provide your work or school account to authorize Stream Analytics to access your Power BI resource. Then
invent a name for the output, and for the target Power BI dataset and table.

Set the query


The query governs the translation from input to output.
Use the Test function to check that you get the right output. Give it the sample data that you took from the inputs
page.
Query to display counts of events
Paste this query:

SELECT
  flat.ArrayValue.name,
  count(*)
INTO
  [pbi-output]
FROM
  [export-input] A
OUTER APPLY GetElements(A.[event]) as flat
GROUP BY TumblingWindow(minute, 1), flat.ArrayValue.name

export-input is the alias we gave to the stream input


pbi-output is the output alias we defined
We use OUTER APPLY GetElements because the event name is in a nested JSON array. Then the Select picks the
event name, together with a count of the number of instances with that name in the time period. The Group By
clause groups the elements into time periods of one minute.
Query to display metric values

SELECT
  A.context.data.eventTime,
  avg(CASE WHEN flat.ArrayValue.myMetric.value IS NULL THEN 0
      ELSE flat.ArrayValue.myMetric.value END) as myValue
INTO
  [pbi-output]
FROM
  [export-input] A
OUTER APPLY GetElements(A.context.custom.metrics) as flat
GROUP BY TumblingWindow(minute, 1), A.context.data.eventTime

This query drills into the metrics telemetry to get the event time and the metric value. The metric values are
inside an array, so we use the OUTER APPLY GetElements pattern to extract the rows. "myMetric" is the name of
the metric in this case.
Query to include values of dimension properties

WITH flat AS (
SELECT
  MySource.context.data.eventTime as eventTime,
  InstanceId = MyDimension.ArrayValue.InstanceId,
  BusinessUnitId = MyDimension.ArrayValue.BusinessUnitId
FROM MySource
OUTER APPLY GetArrayElements(MySource.context.custom.dimensions) MyDimension
)
SELECT
  eventTime,
  InstanceId,
  BusinessUnitId
INTO AIOutput
FROM flat

This query includes values of the dimension properties without depending on a particular dimension being at a
fixed index in the dimension array.
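The same "flatten the array" idea applies to offline processing of the exported JSON. A sketch in Python (InstanceId and BusinessUnitId are the hypothetical dimension names from the query above):

```python
def flatten_dimensions(dimensions):
    # context.custom.dimensions is an array of single-key objects, e.g.
    # [{"InstanceId": "i-42"}, {"BusinessUnitId": "bu-7"}]. Merging them
    # into one dict makes lookups independent of array position.
    flat = {}
    for entry in dimensions:
        flat.update(entry)
    return flat

print(flatten_dimensions([{"InstanceId": "i-42"}, {"BusinessUnitId": "bu-7"}]))
```

After flattening, a dimension is fetched by name (`flat["InstanceId"]`) no matter where it sat in the array.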

Run the job


You can select a date in the past to start the job from.

Wait until the job is Running.

See results in Power BI


WARNING
There are much better and easier recommended ways to display Application Insights data in Power BI. The path illustrated
here is just an example to illustrate how to process exported data.

Open Power BI with your work or school account, and select the dataset and table that you defined as the output of
the Stream Analytics job.

Now you can use this dataset in reports and dashboards in Power BI.
No data?
Check that you set the date format correctly to YYYY-MM-DD (with dashes).

Video
Noam Ben Zeev shows how to process exported data using Stream Analytics.

Next steps
Continuous export
Detailed data model reference for the property types and values.
Application Insights
Application Insights Export Data Model
11/1/2017 • 7 min to read

The tables below list the properties of telemetry sent from the Application Insights SDKs to the portal. You'll see these
properties in data output from Continuous Export. They also appear in property filters in Metric Explorer and
Diagnostic Search.
Points to note:
[0] in these tables denotes a point in the path where you have to insert an index; but it isn't always 0.
Time durations are in tenths of a microsecond, so 10000000 == 1 second.
Dates and times are UTC, and are given in the ISO format yyyy-MM-DDThh:mm:ss.fffffffZ
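Both conventions are easy to decode programmatically. A sketch in Python (the duration value is taken from the example request that follows; the timestamp is a made-up illustration in the documented format):

```python
from datetime import datetime, timezone

TICKS_PER_SECOND = 10_000_000  # durations are tenths of a microsecond

def duration_seconds(ticks):
    return ticks / TICKS_PER_SECOND

# The example request below reports durationMetric.value == 1046804.0,
# i.e. roughly a tenth of a second:
print(duration_seconds(1046804.0))

# eventTime values are ISO-format UTC strings (fractional seconds trimmed
# here for brevity; this timestamp is hypothetical):
t = datetime.strptime("2016-03-21T10:05:21Z",
                      "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
print(t.isoformat())
```
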

Example
// A server report about an HTTP request
{
"request": [
{
"urlData": { // derived from 'url'
"host": "[Link]",
"base": "/",
"hashTag": ""
},
"responseCode": 200, // Sent to client
"success": true, // Default == responseCode<400
// Request id becomes the operation id of child events
"id": "fCOhCdCnZ9I=",
"name": "GET Home/Index",
"count": 1, // 100% / sampling rate
"durationMetric": {
"value": 1046804.0, // 10000000 == 1 second
// Currently the following fields are redundant:
"count": 1.0,
"min": 1046804.0,
"max": 1046804.0,
"stdDev": 0.0,
"sampledValue": 1046804.0
},
"url": "/"
}
],
"internal": {
"data": {
"id": "7f156650-ef4c-11e5-8453-3f984b167d05",
"documentVersion": "1.61"
}
},
"context": {
"device": { // client browser
"type": "PC",
"screenResolution": { },
"roleInstance": "[Link]"
},
"application": { },
"location": { // derived from client ip
"continent": "North America",
"country": "United States",
// last octet is anonymized to 0 at portal:
"clientip": "[Link]",
"province": "",
"city": ""
},
"data": {
"isSynthetic": true, // we identified source as a bot
// percentage of generated data sent to portal:
"samplingRate": 100.0,
"eventTime": "2016-03-21T[Link].7334717Z" // UTC
},
"user": {
"isAuthenticated": false,
"anonId": "us-tx-sn1-azr", // bot agent id
"anonAcquisitionDate": "0001-01-01T[Link]Z",
"authAcquisitionDate": "0001-01-01T[Link]Z",
"accountAcquisitionDate": "0001-01-01T[Link]Z"
},
"operation": {
"id": "fCOhCdCnZ9I=",
"parentId": "fCOhCdCnZ9I=",
"name": "GET Home/Index"
},
"cloud": { },
"serverDevice": { },
"custom": { // set by custom fields of track calls
"dimensions": [ ],
"metrics": [ ]
},
"session": {
"id": "65504c10-44a6-489e-b9dc-94184eb00d86",
"isFirst": true
}
}
}

Context
All types of telemetry are accompanied by a context section. Not all of these fields are transmitted with every
data point.

PATH TYPE NOTES

context.custom.dimensions [0] object [ ] Key-value string pairs set by the custom properties parameter. Key max length 100, values max length 1024. With more than 100 unique values, the property can be searched but cannot be used for segmentation. Max 200 keys per ikey.
context.custom.metrics [0] object [ ] Key-value pairs set by the custom measurements parameter and by TrackMetrics. Key max length 100; values may be numeric.
context.data.eventTime string UTC
context.data.isSynthetic boolean Request appears to come from a bot or web test.
context.data.samplingRate number Percentage of telemetry generated by the SDK that is sent to portal. Range 0.0-100.0.
context.device object Client device
context.device.browser string IE, Chrome, ...
context.device.browserVersion string Chrome 48.0, ...
context.device.deviceModel string
context.device.deviceName string
context.device.id string
context.device.locale string en-GB, de-DE, ...
context.device.network string
context.device.oemName string
context.device.os string Host OS
context.device.osVersion string Host OS version
context.device.roleInstance string ID of server host
context.device.roleName string
context.device.screenResolution string
context.device.type string PC, Browser, ...
context.location object Derived from clientip.
context.location.city string Derived from clientip, if known
context.location.clientip string Last octet is anonymized to 0.
context.location.continent string
context.location.country string
context.location.province string State or province
context.operation.id string Items that have the same operation id are shown as Related Items in the portal. Usually the request id.
context.operation.name string url or request name
context.operation.parentId string Allows nested related items.
context.session.id string Id of a group of operations from the same source. A period of 30 minutes without an operation signals the end of a session.
context.session.isFirst boolean
context.user.accountAcquisitionDate string
context.user.accountId string
context.user.anonAcquisitionDate string
context.user.anonId string
context.user.authAcquisitionDate string
context.user.authId string Authenticated User
context.user.isAuthenticated boolean
context.user.storeRegion string

Events
Custom events generated by TrackEvent().

PATH TYPE NOTES

event [0] count integer 100/(sampling rate). For example 4 => 25%.
event [0] name string Event name. Max length 250.
event [0] url string
event [0] urlData.base string
event [0] urlData.host string

Exceptions
Reports exceptions in the server and in the browser.

PATH TYPE NOTES

basicException [0] assembly string
basicException [0] count integer 100/(sampling rate). For example 4 => 25%.
basicException [0] exceptionGroup string
basicException [0] exceptionType string
basicException [0] failedUserCodeMethod string
basicException [0] failedUserCodeAssembly string
basicException [0] handledAt string
basicException [0] hasFullStack boolean
basicException [0] id string
basicException [0] method string
basicException [0] message string Exception message. Max length 10k.
basicException [0] outerExceptionMessage string
basicException [0] outerExceptionThrownAtAssembly string
basicException [0] outerExceptionThrownAtMethod string
basicException [0] outerExceptionType string
basicException [0] outerId string
basicException [0] parsedStack [0] assembly string
basicException [0] parsedStack [0] fileName string
basicException [0] parsedStack [0] level integer
basicException [0] parsedStack [0] line integer
basicException [0] parsedStack [0] method string
basicException [0] stack string Max length 10k
basicException [0] typeName string

Trace Messages
Sent by TrackTrace, and by the logging adapters.
PATH TYPE NOTES

message [0] loggerName string

message [0] parameters string

message [0] raw string The log message, max length 10k.

message [0] severityLevel string

Remote dependency
Sent by TrackDependency. Used to report performance and usage of calls to dependencies in the server, and
AJAX calls in the browser.

PATH TYPE NOTES

remoteDependency [0] async boolean
remoteDependency [0] baseName string
remoteDependency [0] commandName string For example "home/index"
remoteDependency [0] count integer 100/(sampling rate). For example 4 => 25%.
remoteDependency [0] dependencyTypeName string HTTP, SQL, ...
remoteDependency [0] durationMetric.value number Time from call to completion of response by dependency
remoteDependency [0] id string
remoteDependency [0] name string Url. Max length 250.
remoteDependency [0] resultCode string from HTTP dependency
remoteDependency [0] success boolean
remoteDependency [0] type string Http, Sql, ...
remoteDependency [0] url string Max length 2000
remoteDependency [0] urlData.base string Max length 2000
remoteDependency [0] urlData.hashTag string
remoteDependency [0] urlData.host string Max length 200


Requests
Sent by TrackRequest. The standard modules use this to report server response time, measured at the server.

PATH TYPE NOTES

request [0] count integer 100/(sampling rate). For example: 4 =>


25%.

request [0] [Link] number Time from request arriving to response.


1e7 == 1s

request [0] id string Operation id

request [0] name string GET/POST + url base. Max length 250

request [0] responseCode integer HTTP response sent to client

request [0] success boolean Default == (responseCode < 400)

request [0] url string Not including host

request [0] [Link] string

request [0] [Link] string

request [0] [Link] string
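The count field in the tables above lets you recover the sampling rate and estimate the original event volume. As a sketch (in Python rather than one of the SDK languages), assuming each exported record is parsed into a dict carrying the count field shown above:

```python
def sampling_rate_percent(count):
    """count == 100 / (sampling rate in percent),
    so count == 4 corresponds to a 25% sampling rate."""
    return 100.0 / count

def estimated_original_events(sampled_events):
    """Each exported record stands in for `count` original events,
    so summing count estimates the pre-sampling volume."""
    return sum(event["count"] for event in sampled_events)

events = [{"count": 4}, {"count": 4}, {"count": 4}]
print(sampling_rate_percent(4))           # 25.0
print(estimated_original_events(events))  # 12
```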

Page View Performance


Sent by the browser. Measures the time to process a page, from user initiating the request to display complete
(excluding async AJAX calls).
Context values show client OS and browser version.

PATH TYPE NOTES

clientPerformance [0] [Link] integer Time from end of receiving the HTML to displaying the page.

clientPerformance [0] name string

clientPerformance [0] [Link] integer Time taken to establish a network connection.

clientPerformance [0] [Link] integer Time from end of sending the request to receiving the HTML in reply.

clientPerformance [0] [Link] integer Time taken to send the HTTP request.

clientPerformance [0] [Link] integer Time from starting to send the request to displaying the page.

clientPerformance [0] url string URL of this request

clientPerformance [0] [Link] string

clientPerformance [0] [Link] string

clientPerformance [0] [Link] string

clientPerformance [0] [Link] string

Page Views
Sent by trackPageView() or stopTrackPage().

PATH TYPE NOTES

view [0] count integer 100/(sampling rate). For example 4 => 25%.

view [0] [Link] integer Value optionally set in trackPageView() or by startTrackPage()-stopTrackPage(). Not the same as clientPerformance values.

view [0] name string Page title. Max length 250

view [0] url string

view [0] [Link] string

view [0] [Link] string

view [0] [Link] string

Availability
Reports the results of availability web tests.

PATH TYPE NOTES

availability [0] [Link] string availability

availability [0] [Link] number 1.0 or 0.0

availability [0] count integer 100/(sampling rate). For example 4 => 25%.

availability [0] [Link] string

availability [0] [Link] integer

availability [0] [Link] string

availability [0] [Link] number Duration of test. 1e7==1s

availability [0] message string Failure diagnostic

availability [0] result string Pass/Fail

availability [0] runLocation string Geographic source of the HTTP request

availability [0] testName string

availability [0] testRunId string

availability [0] testTimestamp string

Metrics
Generated by TrackMetric().
The metric value is found in [Link][0]
For example:

{
  "metric": [ ],
  "context": {
    ...
    "custom": {
      "dimensions": [
        { "ProcessId": "4068" }
      ],
      "metrics": [
        {
          "dispatchRate": {
            "value": 0.001295,
            "count": 1.0,
            "min": 0.001295,
            "max": 0.001295,
            "stdDev": 0.0,
            "sampledValue": 0.001295,
            "sum": 0.001295
          }
        }
      ]
    }
  }
}

About metric values


Metric values, both in metric reports and elsewhere, are reported with a standard object structure. For example:
"durationMetric": {
  "name": "[Link]",
  "type": "Aggregation",
  "value": 468.71603053650279,
  "count": 1.0,
  "min": 468.71603053650279,
  "max": 468.71603053650279,
  "stdDev": 0.0,
  "sampledValue": 468.71603053650279
}

Currently - though this might change in the future - in all values reported from the standard SDK modules,
count==1 and only the name and value fields are useful. The only case where they would be different would be
if you write your own TrackMetric calls in which you set the other parameters.
The purpose of the other fields is to allow metrics to be aggregated in the SDK, to reduce traffic to the portal. For
example, you could average several successive readings before sending each metric report. Then you would
calculate the min, max, standard deviation and aggregate value (sum or average) and set count to the number of
readings represented by the report.
In the tables above, we have omitted the rarely-used fields count, min, max, stdDev and sampledValue.
Instead of pre-aggregating metrics, you can use sampling if you need to reduce the volume of telemetry.
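As an illustration of that pre-aggregation, here is a sketch (in Python rather than one of the SDK languages) that combines several readings into one report using the standard metric-value fields described above; reporting the sum as the value and using the population standard deviation are assumed choices:

```python
import math

def aggregate_metric(name, readings):
    """Combine several readings into one metric report.
    The report's value is the sum of the readings, and count is the
    number of readings the report represents."""
    n = len(readings)
    total = sum(readings)
    mean = total / n
    variance = sum((r - mean) ** 2 for r in readings) / n
    return {
        "name": name,
        "value": total,
        "count": float(n),
        "min": min(readings),
        "max": max(readings),
        "stdDev": math.sqrt(variance),
    }

report = aggregate_metric("queueLength", [3.0, 5.0, 4.0])
print(report["count"], report["min"], report["max"])  # 3.0 3.0 5.0
```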
Durations
Except where otherwise noted, durations are represented in tenths of a microsecond, so that 10000000.0 means
1 second.
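For example, a minimal conversion helper (a Python sketch, not part of any SDK):

```python
TICKS_PER_SECOND = 1e7  # durations are in tenths of a microsecond

def ticks_to_seconds(ticks):
    """Convert an exported duration (tenths of a microsecond) to seconds."""
    return ticks / TICKS_PER_SECOND

print(ticks_to_seconds(10000000.0))  # 1.0
print(ticks_to_seconds(2500000.0))   # 0.25
```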

See also
Application Insights
Continuous Export
Code samples
Feed Power BI from Application Insights
11/28/2017 • 4 min to read

Power BI is a suite of business tools that helps you analyze data and share insights. Rich dashboards are
available on every device. You can combine data from many sources, including Analytics queries from Azure
Application Insights.
There are three recommended methods of exporting Application Insights data to Power BI. You can use them
separately or together.
Power BI adapter. Set up a complete dashboard of telemetry from your app. The set of charts is predefined,
but you can add your own queries from any other sources.
Export Analytics queries. Write any query you want and export it to Power BI. You can write the query in Analytics or export it from Usage Funnels. You can place this query on a dashboard, along with any other data.
Continuous export and Azure Stream Analytics. This method is useful if you want to keep your data for
long periods. If you don't, use one of the other methods, because this one involves more work to set up.

Power BI adapter
This method creates a complete dashboard of telemetry for you. The initial dataset is predefined, but you can
add more data to it.
Get the adapter
1. Sign in to Power BI.
2. Open Get Data, Services, and then Application Insights.

3. Provide the details of your Application Insights resource.


4. Wait a minute or two for the data to be imported.

You can edit the dashboard, combining the Application Insights charts with those of other sources, and with
Analytics queries. You can get more charts in the visualization gallery, and each chart has parameters you can
set.
After the initial import, the dashboard and the reports continue to update daily. You can control the refresh
schedule on the dataset.

Export Analytics queries


This route allows you to write any Analytics query you like, or export from Usage Funnels, and then export that
to a Power BI dashboard. (You can add to the dashboard created by the adapter.)
One time: install Power BI Desktop
To import your Application Insights query, you use the desktop version of Power BI. Then you can publish it to
the web or to your Power BI cloud workspace.
Install Power BI Desktop.
Export an Analytics query
1. Open Analytics and write your query.
2. Test and refine the query until you're happy with the results. Make sure that the query runs correctly in
Analytics before you export it.
3. On the Export menu, choose Power BI (M). Save the text file.

4. In Power BI Desktop, select Get Data > Blank Query. Then, in the query editor, under View, select
Advanced Editor.
Paste the exported M Language script into the Advanced Editor.

5. To allow Power BI to access Azure, you might have to provide credentials. Use Organizational account
to sign in with your Microsoft account.
If you need to verify the credentials, use the Data Source Settings menu command in the query editor.
Be sure to specify the credentials you use for Azure, which might be different from your credentials for
Power BI.
6. Choose a visualization for your query, and select the fields for x-axis, y-axis, and segmenting dimension.

7. Publish your report to your Power BI cloud workspace. From there, you can embed a synchronized
version into other web pages.
8. Refresh the report manually at intervals, or set up a scheduled refresh on the options page.
Export a Funnel
1. Make your Funnel.
2. Select Power BI.

3. In Power BI Desktop, select Get Data > Blank Query. Then, in the query editor, under View, select
Advanced Editor.

Paste the exported M Language script into the Advanced Editor.


4. Select items from the query, and choose a Funnel visualization.

5. Change the title to make it meaningful, and publish your report to your Power BI cloud workspace.

Troubleshooting
You might encounter errors pertaining to credentials or the size of the dataset. Here is some information about
what to do about these errors.
Unauthorized (401 or 403)
This can happen if your refresh token has not been updated. Try these steps to ensure you still have access:
1. Sign into the Azure portal, and make sure you can access the resource.
2. Try to refresh the credentials for the dashboard.
If you do have access and refreshing the credentials does not work, please open a support ticket.
Bad Gateway (502)
This is usually caused by an Analytics query that returns too much data. Try using a smaller time range for the
query.
If reducing the dataset coming from the Analytics query doesn't meet your requirements, consider using the API
to pull a larger dataset. Here's how to convert the M-Query export to use the API.
1. Create an API key.
2. Update the Power BI M script that you exported from Analytics by replacing the Azure Resource Manager
URL with the Application Insights API.
Replace [Link] with [Link].
3. Finally, update the credentials to basic, and use your API key.
Existing script

Source = [Link]([Link]("[Link]
xxxxxxxxxxxx/resourcegroups//providers/[Link]/components//api/query?api-version=2014-12-01-
preview",[Query=[#"csl"="requests",#"x-ms-app"="AAPBI"],Timeout=#duration(0,0,4,0)]))

Updated script

Source = [Link]([Link]("[Link]
api-version=2014-12-01-preview",[Query=[#"csl"="requests",#"x-ms-app"="AAPBI"],Timeout=#duration(0,0,4,0)]))

About sampling
If your application sends a lot of data, you might want to use the adaptive sampling feature, which sends only a
percentage of your telemetry. The same is true if you have manually set sampling either in the SDK or on
ingestion. Learn more about sampling.

Next steps
Power BI - Learn
Analytics tutorial
Data collection, retention and storage in Application Insights
11/1/2017 • 10 min to read

When you install the Azure Application Insights SDK in your app, it sends telemetry about your app to the Cloud.
Naturally, responsible developers want to know exactly what data is sent, what happens to the data, and how they
can keep control of it. In particular, could sensitive data be sent, where is it stored, and how secure is it?
First, the short answer:
The standard telemetry modules that run "out of the box" are unlikely to send sensitive data to the service. The
telemetry is concerned with load, performance and usage metrics, exception reports, and other diagnostic
data. The main user data visible in the diagnostic reports are URLs; but your app shouldn't in any case put
sensitive data in plain text in a URL.
You can write code that sends additional custom telemetry to help you with diagnostics and monitoring usage.
(This extensibility is a great feature of Application Insights.) It would be possible, by mistake, to write this code
so that it includes personal and other sensitive data. If your application works with such data, you should apply
a thorough review process to all the code you write.
While developing and testing your app, it's easy to inspect what's being sent by the SDK. The data appears in
the debugging output windows of the IDE and browser.
The data is held in Microsoft Azure servers in the USA or Europe. (But your app can run anywhere.) Azure has
strong security processes and meets a broad range of compliance standards. Only you and your designated
team have access to your data. Microsoft staff can have restricted access to it only under specific limited
circumstances with your knowledge. It's encrypted in transit, though not in the servers.
The rest of this article elaborates more fully on these answers. It's designed to be self-contained, so that you can
show it to colleagues who aren't part of your immediate team.

What is Application Insights?


Azure Application Insights is a service provided by Microsoft that helps you improve the performance and
usability of your live application. It monitors your application all the time it's running, both during testing and
after you've published or deployed it. Application Insights creates charts and tables that show you, for example,
what times of day you get most users, how responsive the app is, and how well it is served by any external
services that it depends on. If there are crashes, failures or performance issues, you can search through the
telemetry data in detail to diagnose the cause. And the service will send you emails if there are any changes in the
availability and performance of your app.
In order to get this functionality, you install an Application Insights SDK in your application, which becomes part
of its code. When your app is running, the SDK monitors its operation and sends telemetry to the Application
Insights service. This is a cloud service hosted by Microsoft Azure. (But Application Insights works for any
applications, not just those that are hosted in Azure.)

The Application Insights service stores and analyzes the telemetry. To see the analysis or search through the
stored telemetry, you sign in to your Azure account and open the Application Insights resource for your
application. You can also share access to the data with other members of your team, or with specified Azure
subscribers.
You can have data exported from the Application Insights service, for example to a database or to external tools.
You provide each tool with a special key that you obtain from the service. The key can be revoked if necessary.
Application Insights SDKs are available for a range of application types: web services hosted in your own J2EE or
[Link] servers, or in Azure; web clients - that is, the code running in a web page; desktop apps and services;
device apps such as Windows Phone, iOS, and Android. They all send telemetry to the same service.

What data does it collect?


How is the data collected?
There are three sources of data:
The SDK, which you integrate with your app either in development or at run time. There are different SDKs
for different application types. There's also an SDK for web pages, which loads into the end-user's browser
along with the page.
Each SDK has a number of modules, which use different techniques to collect different types of
telemetry.
If you install the SDK in development, you can use its API to send your own telemetry, in addition to the
standard modules. This custom telemetry can include any data you want to send.
In some web servers, there are also agents that run alongside the app and send telemetry about CPU, memory,
and network occupancy. For example, Azure VMs, Docker hosts, and J2EE servers can have such agents.
Availability tests are processes run by Microsoft that send requests to your web app at regular intervals. The
results are sent to the Application Insights service.
What kinds of data are collected?
The main categories are:
Web server telemetry - HTTP requests. Uri, time taken to process the request, response code, client IP address.
Session id.
Web pages - Page, user and session counts. Page load times. Exceptions. Ajax calls.
Performance counters - Memory, CPU, IO, Network occupancy.
Client and server context - OS, locale, device type, browser, screen resolution.
Exceptions and crashes - stack dumps, build id, CPU type.
Dependencies - calls to external services such as REST, SQL, AJAX. URI or connection string, duration, success,
command.
Availability tests - duration of test and steps, responses.
Trace logs and custom telemetry - anything you code into your logs or telemetry.
More detail.

How can I verify what's being collected?


If you're developing the app using Visual Studio, run the app in debug mode (F5). The telemetry appears in the
Output window. From there, you can copy it and format it as JSON for easy inspection.
There's also a more readable view in the Diagnostics window.
For web pages, open your browser's debugging window.

Can I write code to filter the telemetry before it is sent?


This would be possible by writing a telemetry processor plugin.
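To illustrate the idea: a telemetry processor sits in a chain and decides, for each item, whether to pass it on. The .NET SDK's actual interface is ITelemetryProcessor; this Python sketch only models the chain-of-processors pattern, and the duration threshold is an assumed example:

```python
class DropFastDependencies:
    """Conceptual telemetry filter: forward an item to the next
    processor in the chain only if it should be kept."""

    def __init__(self, next_processor, min_duration_ticks=1000000):
        self.next = next_processor
        self.min_duration_ticks = min_duration_ticks  # 0.1 s, an assumed threshold

    def process(self, item):
        # Drop successful dependency calls faster than the threshold;
        # forward everything else unchanged.
        if (item.get("type") == "dependency"
                and item.get("success")
                and item.get("duration") < self.min_duration_ticks):
            return
        self.next.process(item)

class Sink:
    """Stands in for the SDK's transmission channel."""
    def __init__(self):
        self.items = []
    def process(self, item):
        self.items.append(item)

sink = Sink()
chain = DropFastDependencies(sink)
chain.process({"type": "dependency", "success": True, "duration": 5000})
chain.process({"type": "request", "success": False, "duration": 5000})
print(len(sink.items))  # 1
```

Only the failed request survives the filter; the fast successful dependency call is dropped before it would be sent.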

How long is the data kept?


Raw data points (that is, items that you can query in Analytics and inspect in Search) are kept for up to 90 days. If
you need to keep data longer than that, you can use continuous export to copy it to a storage account.
Aggregated data (that is, counts, averages and other statistical data that you see in Metric Explorer) are retained at
a grain of 1 minute for 90 days.

Who can access the data?


The data is visible to you and, if you have an organization account, your team members.
It can be exported by you and your team members and could be copied to other locations and passed on to other
people.
What does Microsoft do with the information my app sends to Application Insights?
Microsoft uses the data only in order to provide the service to you.

Where is the data held?


In the USA or Europe. You can select the location when you create a new Application Insights resource.
Does that mean my app has to be hosted in the USA or Europe?
No. Your application can run anywhere, either in your own on-premises hosts or in the Cloud.

How secure is my data?


Application Insights is an Azure Service. Security policies are described in the Azure Security, Privacy, and
Compliance white paper.
The data is stored in Microsoft Azure servers. For accounts in the Azure Portal, account restrictions are described
in the Azure Security, Privacy, and Compliance document.
Access to your data by Microsoft personnel is restricted. We access your data only with your permission and if it
is necessary to support your use of Application Insights.
Data in aggregate across all our customers' applications (such as data rates and average size of traces) is used to
improve Application Insights.
Could someone else's telemetry interfere with my Application Insights data?
They could send additional telemetry to your account by using the instrumentation key, which can be found in the
code of your web pages. With enough additional data, your metrics would not correctly represent your app's
performance and usage.
If you share code with other projects, remember to remove your instrumentation key.

Is the data encrypted?


Not inside the servers at present.
All data is encrypted as it moves between data centers.
Is the data encrypted in transit from my application to Application Insights servers?
Yes, we use HTTPS to send data to the portal from nearly all SDKs, including web servers, devices, and HTTPS web pages. The only exception is data sent from plain HTTP web pages.

Personally Identifiable Information


Could Personally Identifiable Information (PII) be sent to Application Insights?
Yes, it's possible.
As general guidance:
Most standard telemetry (that is, telemetry sent without you writing any code) does not include explicit PII.
However, it might be possible to identify individuals by inference from a collection of events.
Exception and trace messages could contain PII
Custom telemetry - that is, calls such as TrackEvent that you write in code using the API or log traces - can
contain any data you choose.
The table at the end of this document contains more detailed descriptions of the data collected.
Am I responsible for complying with laws and regulations in regard to PII?
Yes. It is your responsibility to ensure that the collection and use of the data complies with laws and regulations,
and with the Microsoft Online Services Terms.
You should inform your customers appropriately about the data your application collects and how the data is
used.
Can my users turn off Application Insights?
Not directly. We don't provide a switch that your users can operate to turn off Application Insights.
However, you can implement such a feature in your application. All the SDKs include an API setting that turns off
telemetry collection.
My application is unintentionally collecting sensitive information. Can Application Insights scrub this data so it isn't retained?
Application Insights does not filter or delete your data. You should manage the data appropriately and avoid
sending such data to Application Insights.

Data sent by Application Insights


The SDKs vary between platforms, and there are several components that you can install. (Refer to Application
Insights - overview.) Each component sends different data.
Classes of data sent in different scenarios

YOUR ACTION DATA CLASSES COLLECTED (SEE NEXT TABLE)

Add Application Insights SDK to a .NET web project    ServerContext, Inferred, Perf counters, Requests, Exceptions, Session, Users

Install Status Monitor on IIS    Dependencies, ServerContext, Inferred, Perf counters

Add Application Insights SDK to a Java web app    ServerContext, Inferred, Request, Session, Users

Add JavaScript SDK to web page    ClientContext, Inferred, Page, ClientPerf, Ajax

Define default properties    Properties on all standard and custom events

Call TrackMetric    Numeric values, Properties

Call Track*    Event name, Properties

Call TrackException    Exceptions, Stack dump, Properties

SDK can't collect data (for example, can't access perf counters, or an exception in a telemetry initializer)    SDK diagnostics
For SDKs for other platforms, see their documents.
The classes of collected data

COLLECTED DATA CLASS INCLUDES (NOT AN EXHAUSTIVE LIST)

Properties Any data - determined by your code

DeviceContext Id, IP, Locale, Device model, network, network type, OEM
name, screen resolution, Role Instance, Role Name, Device
Type

ClientContext OS, locale, language, network, window resolution

Session session id

ServerContext Machine name, locale, OS, device, user session, user context,
operation

Inferred geo location from IP address, timestamp, OS, browser

Metrics Metric name and value

Events Event name and value

PageViews URL and page name or screen name

Client perf URL/page name, browser load time

Ajax HTTP calls from web page to server

Requests URL, duration, response code

Dependencies Type (SQL, HTTP, ...), connection string or URI, sync/async, duration, success, SQL statement (with Status Monitor)

Exceptions Type, message, call stacks, source file and line number,
thread id

Crashes Process id, parent process id, crash thread id; application
patch, id, build; exception type, address, reason; obfuscated
symbols and registers, binary start and end addresses, binary
name and path, cpu type

Trace Message and severity level

Perf counters Processor time, available memory, request rate, exception rate, process private bytes, IO rate, request duration, request queue length

Availability Web test response code, duration of each test step, test
name, timestamp, success, response time, test location

SDK diagnostics Trace message or Exception

You can switch off some of the data by editing [Link]


Credits
This product includes GeoLite2 data created by MaxMind, available from [Link]
Resources, roles, and access control in Application Insights
12/11/2017 • 2 min to read

You can control who has read and update access to your data in Azure Application Insights, by using Role-based
access control in Microsoft Azure.

IMPORTANT
Assign access to users in the resource group or subscription to which your application resource belongs - not in the
resource itself. Assign the Application Insights component contributor role. This ensures uniform control of access to
web tests and alerts along with your application resource. Learn more.

Resources, groups and subscriptions


First, some definitions:
Resource - An instance of a Microsoft Azure service. Your Application Insights resource collects, analyzes
and displays the telemetry data sent from your application. Other types of Azure resources include web
apps, databases, and VMs.
To see your resources, open the Azure Portal, sign in, and click All Resources. To find a resource, type part
of its name in the filter field.

Resource group - Every resource belongs to one group. A group is a convenient way to manage related
resources, particularly for access control. For example, into one resource group you could put a Web App,
an Application Insights resource to monitor the app, and a Storage resource to keep exported data.
Subscription - To use Application Insights or other Azure resources, you sign in to an Azure subscription.
Every resource group belongs to one Azure subscription, where you choose your price package and, if it's
an organization subscription, choose the members and their access permissions.
Microsoft account - The username and password that you use to sign in to Microsoft Azure subscriptions,
XBox Live, [Link], and other Microsoft services.

Control access in the resource group


It's important to understand that in addition to the resource you created for your application, there are also
separate hidden resources for alerts and web tests. They are attached to the same resource group as your
application. You might also have put other Azure services in there, such as websites or storage.

To control access to these resources it's therefore recommended to:


Control access at the resource group or subscription level.
Assign the Application Insights Component contributor role to users. This allows them to edit web tests,
alerts, and Application Insights resources, without providing access to any other services in the group.

To provide access to another user


You must have Owner rights to the subscription or the resource group.
The user must have a Microsoft Account, or access to their organizational Microsoft Account. You can provide
access to individuals, and also to user groups defined in Azure Active Directory.
Navigate to the resource group
Add the user there.

Or you could go up another level and add the user to the Subscription.
Select a role

ROLE IN THE RESOURCE GROUP

Owner Can change anything, including user access

Contributor Can edit anything, including all resources

Application Insights Component contributor Can edit Application Insights resources, web tests and alerts

Reader Can view but not change anything

'Editing' includes creating, deleting and updating:


Resources
Web tests
Alerts
Continuous export
Select the user
If the user you want isn't in the directory, you can invite anyone with a Microsoft account. (If they use services like
[Link], OneDrive, Windows Phone, or XBox Live, they have a Microsoft account.)

Related content
Role based access control in Azure
IP addresses used by Application Insights and Log Analytics
1/11/2018 • 2 min to read

The Azure Application Insights service uses a number of IP addresses. You might need to know these addresses if
the app that you are monitoring is hosted behind a firewall.

NOTE
Although these addresses are static, it's possible that we will need to change them from time to time.

Outgoing ports
You need to open some outgoing ports in your server's firewall to allow the Application Insights SDK and/or
Status Monitor to send data to the portal:

PURPOSE URL IP PORTS

Telemetry [Link] [Link] 443


[Link] [Link]
[Link] [Link]
[Link]
[Link]
[Link]

Live Metrics Stream [Link] [Link] 443


[Link] [Link]
[Link]

Internal Telemetry [Link] [Link] 443


[Link]

Status Monitor
Status Monitor Configuration - needed only when making changes.

PURPOSE URL IP PORTS

Configuration [Link] 443

Configuration [Link] 443

Configuration [Link] 443

Configuration [Link] 443

Configuration [Link]- 443


[Link]
PURPOSE URL IP PORTS

Configuration [Link] 443

Configuration [Link] 443

Installation [Link] 443

HockeyApp
PURPOSE URL IP PORTS

Crash data [Link] [Link] 80, 443

Availability tests
This is the list of addresses from which availability web tests are run. If you want to run web tests on your app,
but your web server is restricted to serving specific clients, then you will have to permit incoming traffic from our
availability test servers.
Open ports 80 (http) and 443 (https) for incoming traffic from these addresses (IP addresses are grouped by
location):

AU : Sydney
[Link]
[Link]
[Link]
[Link]
BR : Sao Paulo
[Link]
[Link]
[Link]
[Link]
CH : Zurich
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
FR : Paris
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
HK : Hong Kong
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
IE : Dublin
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
JP : Kawaguchi
[Link]
[Link]
[Link]
[Link]
NL : Amsterdam
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
RU : Moscow
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
SE : Stockholm
[Link]
[Link]
[Link]
[Link]
GB : United Kingdom
[Link]
[Link]
[Link]
[Link]
SG : Singapore
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
US : CA-San Jose
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
US : FL-Miami
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
US : IL-Chicago
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
US : TX-San Antonio
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
US : VA-Ashburn
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]

Application Insights API


PURPOSE URI IP PORTS

API [Link] [Link] 80,443


[Link] [Link]
[Link]
[Link]
[Link]
[Link]

API docs [Link] [Link] 80,443


[Link] [Link]
[Link]
[Link]
[Link]
[Link]
[Link]
[Link]
PURPOSE URI IP PORTS

Internal API [Link] dynamic 443


[Link]
[Link]
[Link]
[Link]
[Link]
[Link]

Log Analytics API


PURPOSE URI IP PORTS

API [Link] dynamic 80,443


*.[Link]

API docs [Link] dynamic 80,443


[Link]
[Link]

Application Insights Analytics


PURPOSE URI IP PORTS

Analytics Portal [Link]. dynamic 80,443


io

CDN [Link] dynamic 80,443


[Link]

Media CDN [Link] dynamic 80,443


[Link]

Note: The *.[Link] domain is owned by the Application Insights team.

Log Analytics Portal


PURPOSE URI IP PORTS

Portal [Link] dynamic 80,443

CDN [Link] dynamic 80,443


[Link]

Note: The *.[Link] domain is owned by the Log Analytics team.

Application Insights Azure Portal Extension


PURPOSE URI IP PORTS

Application Insights [Link] dynamic 80,443


Extension [Link]
PURPOSE URI IP PORTS

Application Insights insightsportal-prod2- dynamic 80,443


Extension CDN [Link]
insightsportal-prod2-asiae-
[Link]
insightsportal-cdn-
[Link]

Application Insights SDKs


PURPOSE URI IP PORTS

Application Insights JS SDK [Link] dynamic 80,443


CDN

Application Insights Java [Link]. dynamic 80,443


SDK net

Profiler
PURPOSE URI IP PORTS

Agent [Link] [Link] 443


t [Link]
*.[Link]. [Link]
net [Link]
[Link]
[Link]
[Link]
[Link]

Portal [Link] dynamic 443


.net

Storage *.[Link] dynamic 443

Snapshot Debugger
PURPOSE URI IP PORTS

Agent [Link] [Link] 443


*.[Link] [Link]
t [Link]
[Link]

Portal [Link] dynamic 443


[Link]

Storage *.[Link] dynamic 443


Troubleshooting no data - Application Insights for .NET
1/3/2018 • 9 min to read

Some of my telemetry is missing


In Application Insights, I only see a fraction of the events that are being generated by my app.
If you are consistently seeing the same fraction, it's probably due to adaptive sampling. To confirm this, open
Search (from the overview blade) and look at an instance of a Request or other event. At the bottom of the
properties section click "..." to get full property details. If Request Count > 1, then sampling is in operation.
Otherwise, it's possible that you're hitting a data rate limit for your pricing plan. These limits are applied per
minute.

No data from my server


I installed my app on my web server, and now I don't see any telemetry from it. It worked OK on my dev machine.
Probably a firewall issue. Set firewall exceptions for Application Insights to send data.
IIS Server might be missing some prerequisites: .NET Extensibility 4.5, and [Link] 4.5.
I installed Status Monitor on my web server to monitor existing apps. I don't see any results.
See Troubleshooting Status Monitor.

No 'Add Application Insights' option in Visual Studio


When I right-click an existing project in Solution Explorer, I don't see any Application Insights options.
Not all types of .NET project are supported by the tools. Web and WCF projects are supported. For other project
types such as desktop or service applications, you can still add an Application Insights SDK to your project
manually.
Make sure you have Visual Studio 2013 Update 3 or later. It comes pre-installed with Developer Analytics tools,
which provide the Application Insights SDK.
Select Tools, Extensions and Updates and check that Developer Analytics Tools is installed and enabled. If
so, click Updates to see if there's an update available.
Open the New Project dialog and choose [Link] Web application. If you see the Application Insights option
there, then the tools are installed. If not, try uninstalling and then re-installing the Application Insights Tools.

Adding Application Insights failed


When I try to add Application Insights to an existing project, I see an error message.
Likely causes:
Communication with the Application Insights portal failed; or
There is some problem with your Azure account;
You only have read access to the subscription or group where you were trying to create the new resource.
Fix:
Check that you provided sign-in credentials for the right Azure account.
In your browser, check that you have access to the Azure portal. Open Settings and see if there is any restriction.
Add Application Insights to your existing project: In Solution Explorer, right click your project and choose "Add
Application Insights."
If it still isn't working, follow the manual procedure to add a resource in the portal and then add the SDK to your
project.

I get an error "Instrumentation key cannot be empty"


Looks like something went wrong while you were installing Application Insights or maybe a logging adapter.
In Solution Explorer, right-click your project and choose Application Insights > Configure Application Insights.
You'll get a dialog that invites you to sign in to Azure and either create an Application Insights resource, or re-use
an existing one.

"NuGet package(s) are missing" on my build server


Everything builds OK when I'm debugging on my development machine, but I get a NuGet error on the build
server.
Please see NuGet Package Restore and Automatic Package Restore.

Missing menu command to open Application Insights from Visual Studio


When I right-click my project in Solution Explorer, I don't see any Application Insights commands, or I don't see an
Open Application Insights command.
Likely causes:
You created the Application Insights resource manually, or the project is of a type that isn't supported by the
Application Insights tools.
The Developer Analytics tools are disabled in your Visual Studio.
Your Visual Studio is older than 2013 Update 3.
Fix:
Make sure your Visual Studio version is 2013 update 3 or later.
Select Tools, Extensions and Updates and check that Developer Analytics tools is installed and enabled. If
so, click Updates to see if there's an update available.
Right-click your project in Solution Explorer. If you see the command Application Insights > Configure
Application Insights, use it to connect your project to the resource in the Application Insights service.
Otherwise, your project type isn't directly supported by the Application Insights tools. To see your telemetry, sign in
to the Azure portal, choose Application Insights on the left navigation bar, and select your application.

'Access denied' on opening Application Insights from Visual Studio


The 'Open Application Insights' menu command takes me to the Azure portal, but I get an 'access denied' error.
The Microsoft sign-in that you last used on your default browser doesn't have access to the resource that was
created when Application Insights was added to this app. There are two likely reasons:
You have more than one Microsoft account - maybe a work and a personal Microsoft account? The sign-in
that you last used on your default browser was for a different account than the one that has access to add
Application Insights to the project.
Fix: Click your name at top right of the browser window, and sign out. Then sign in with the account that
has access. Then on the left navigation bar, click Application Insights and select your app.
Someone else added Application Insights to the project, and they forgot to give you access to the resource
group in which it was created.
Fix: If they used an organizational account, they can add you to the team; or they can grant you individual
access to the resource group.

'Asset not found' on opening Application Insights from Visual Studio


The 'Open Application Insights' menu command takes me to the Azure portal, but I get an 'asset not found' error.
Likely causes:
The Application Insights resource for your application has been deleted; or
The instrumentation key was set or changed in [Link] by editing it directly, without updating
the project file.
The instrumentation key in [Link] controls where the telemetry is sent. A line in the project file
controls which resource is opened when you use the command in Visual Studio.
Fix:
In Solution Explorer, right-click the project and choose Application Insights, Configure Application Insights. In
the dialog, you can either choose to send telemetry to an existing resource, or create a new one. Or:
Open the resource directly. Sign in to the Azure portal, click Application Insights on the left navigation bar, and
then select your app.

Where do I find my telemetry?


I signed in to the Microsoft Azure portal, and I'm looking at the Azure home dashboard. So where do I find my
Application Insights data?
On the left navigation bar, click Application Insights, then your app name. If you don't have any projects
there, you need to add or configure Application Insights in your web project.
There you'll see some summary charts. You can click through them to see more detail.
In Visual Studio, while you're debugging your app, click the Application Insights button.

No server data (or no data at all)


I ran my app and then opened the Application Insights service in Microsoft Azure, but all the charts show 'Learn
how to collect...' or 'Not configured.' Or, only Page View and user data, but no server data.
Run your application in debug mode in Visual Studio (F5). Use the application so as to generate some
telemetry. Check that you can see events logged in the Visual Studio output window.

In the Application Insights portal, open Diagnostic Search. Data usually appears here first.
Click the Refresh button. The blade refreshes itself periodically, but you can also do it manually. The refresh
interval is longer for larger time ranges.
Check the instrumentation keys match. On the main blade for your app in the Application Insights portal, in
the Essentials drop-down, look at Instrumentation key. Then, in your project in Visual Studio, open
[Link] and find the <instrumentationkey> . Check that the two keys are equal. If not:
In the portal, click Application Insights and look for the app resource with the right key; or
In Visual Studio Solution Explorer, right-click the project and choose Application Insights, Configure.
Reset the app to send telemetry to the right resource.
If you can't find the matching keys, check that you are using the same sign-in credentials in Visual
Studio as in the portal.
In the Microsoft Azure home dashboard, look at the Service Health map. If there are some alert indications, wait
until they have returned to OK and then close and re-open your Application Insights application blade.
Check also our status blog.
Did you write any code for the server-side SDK that might change the instrumentation key in TelemetryClient
instances or in TelemetryContext ? Or did you write a filter or sampling configuration that might be filtering out
too much?
If you edited [Link], carefully check the configuration of TelemetryInitializers and
TelemetryProcessors. An incorrectly-named type or parameter can cause the SDK to send no data.
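For reference, this is the section to inspect. The sketch below assumes the standard [Link] layout; the type and parameter names are illustrative placeholders, not real processors:

```xml
<!-- Sketch of the TelemetryProcessors section (names illustrative).
     A Type that doesn't exactly match the implementing class, or a
     child element that doesn't match a public property on that class,
     can silently stop the SDK from sending data. -->
<TelemetryProcessors>
  <Add Type="MyApp.Telemetry.MyFilteringProcessor, MyApp">
    <!-- Each child element here must match a public property
         on MyFilteringProcessor. -->
  </Add>
</TelemetryProcessors>
```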

No data on Page Views, Browsers, Usage


I see data in Server Response Time and Server Requests charts, but no data in Page View Load time, or in the
Browser or Usage blades.
The data comes from scripts in the web pages.
If you added Application Insights to an existing web project, you have to add the scripts by hand.
Make sure Internet Explorer isn't displaying your site in Compatibility mode.
Use the browser's debug feature (F12 on some browsers, then choose Network) to verify that data is being sent
to [Link] .

No dependency or exception data


See dependency telemetry and exception telemetry.

No performance data
Performance data (CPU, IO rate, and so on) is available for Java web services, Windows desktop apps, IIS web apps
and services (if you install Status Monitor), and Azure Cloud Services. You'll find it under Settings, Servers.

No (server) data since I published the app to my server


Check that you actually copied all the Microsoft.ApplicationInsights DLLs to the server, together with
[Link].
In your firewall, you might have to open some TCP ports.
If you have to use a proxy to send out of your corporate network, set defaultProxy in [Link].
Windows Server 2008: Make sure you have installed the following updates: KB2468871, KB2533523,
KB2600217.

I used to see data, but it has stopped


Check the status blog.
Have you hit your monthly quota of data points? Open the Quota and Pricing blade to find out. If so, you can
upgrade your plan, or pay for additional capacity. See the pricing scheme.

I don't see all the data I'm expecting


If your application sends a lot of data and you are using the Application Insights SDK for [Link] version
2.0.0-beta3 or later, the adaptive sampling feature may operate and send only a percentage of your telemetry.
You can disable it, but this is not recommended. Sampling is designed so that related telemetry is correctly
transmitted, for diagnostic purposes.
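For the ASP.NET SDK, adaptive sampling is typically configured as a telemetry processor entry in [Link]. A hedged sketch follows; the type name assumes the standard server telemetry channel package, so verify it against your installed version. Deleting the entry disables sampling (not recommended), while raising the rate keeps more telemetry:

```xml
<!-- Sketch of the adaptive sampling entry in ApplicationInsights.config.
     Removing this <Add> disables sampling; raising
     MaxTelemetryItemsPerSecond makes sampling less aggressive. -->
<TelemetryProcessors>
  <Add Type="Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.AdaptiveSamplingTelemetryProcessor, Microsoft.AI.ServerTelemetryChannel">
    <MaxTelemetryItemsPerSecond>5</MaxTelemetryItemsPerSecond>
  </Add>
</TelemetryProcessors>
```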

Wrong geographical data in user telemetry


The city, region, and country dimensions are derived from IP addresses and aren't always accurate.

Exception "method not found" on running in Azure Cloud Services


Did you build for .NET 4.6? 4.6 is not automatically supported in Azure Cloud Services roles. Install 4.6 on each role
before running your app.

Still not working...


Application Insights forum
Snapshot Debugger: Troubleshooting Guide
11/29/2017 • 6 min to read

Application Insights Snapshot Debugger allows you to automatically collect a debug snapshot from live web
applications. The snapshot shows the state of source code and variables at the moment an exception was thrown. If
you are having difficulty getting the Application Insights Snapshot Debugger up and running, this article walks you
through how the debugger works, along with solutions to common troubleshooting scenarios.

How does Application Insights Snapshot Debugger work


Application Insights Snapshot Debugger is part of the Application Insights telemetry pipeline (an instance of
ITelemetryProcessor). The snapshot collector monitors both the exceptions thrown in your code
([Link]) and the exceptions that get tracked by the Application Insights Exception
Telemetry pipeline. Once you have successfully added the snapshot collector to your project and it has detected
an exception in the Application Insights telemetry pipeline, an Application Insights custom event with the name
'AppInsightsSnapshotCollectorLogs' and 'SnapshotCollectorEnabled' in the Custom Data will be sent. At the same
time, it will start a process with the name of '[Link]' to upload the collected snapshot data files to
Application Insights. When the '[Link]' process starts, a custom event with the name
'UploaderStart' will be sent. After these steps, the snapshot collector enters its normal monitoring
behavior.
While the snapshot collector is monitoring Application Insights exception telemetry, it uses the parameters (for
example, ThresholdForSnapshotting, MaximumSnapshotsRequired, MaximumCollectionPlanSize,
ProblemCounterResetInterval) defined in the configuration to determine when to collect a snapshot. When all the
rules are met, the collector will request a snapshot for the next exception thrown at the same place. Simultaneously,
an Application Insights custom event with the name 'AppInsightsSnapshotCollectorLogs' and 'RequestSnapshots'
will be sent. Since the compiler optimizes 'Release' code, local variables may not be visible in the collected
snapshot. When it requests snapshots, the snapshot collector will try to deoptimize the method that threw the
exception. During this time, an Application Insights custom event with name 'AppInsightsSnapshotCollectorLogs'
and 'ProductionBreakpointsDeOptimizeMethod' in the custom data will be sent. When the snapshot of the next
exception is collected, the local variables will be available. After the snapshot is collected, the code is
reoptimized to restore performance.
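The parameters mentioned above are set on the snapshot collector's entry in configuration. A sketch for a classic ASP.NET app's [Link], using the parameter names from this article with illustrative (not prescriptive) values:

```xml
<!-- Sketch: snapshot collector settings (values illustrative). -->
<TelemetryProcessors>
  <Add Type="Microsoft.ApplicationInsights.SnapshotCollector.SnapshotCollectorTelemetryProcessor, Microsoft.ApplicationInsights.SnapshotCollector">
    <!-- Occurrences of a specific exception before snapshots start -->
    <ThresholdForSnapshotting>1</ThresholdForSnapshotting>
    <!-- Cap on snapshots collected for one problem -->
    <MaximumSnapshotsRequired>3</MaximumSnapshotsRequired>
    <MaximumCollectionPlanSize>50</MaximumCollectionPlanSize>
    <!-- How often problem counters reset (hh:mm:ss) -->
    <ProblemCounterResetInterval>24:00:00</ProblemCounterResetInterval>
  </Add>
</TelemetryProcessors>
```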

NOTE
Deoptimization requires the Application Insights site extension to be installed.

When a snapshot is requested for a specific exception, the snapshot collector will start monitoring your
application's exception handling pipeline ([Link]). When the exception happens again,
the collector will start a snapshot (Application Insights custom event with the name
'AppInsightsSnapshotCollectorLogs' and 'SnapshotStart' in the custom data). Then a shadow copy of the running
process is made (the page table will be duplicated). This normally will take 10 to 20 milliseconds. After this, an
Application Insights custom event with the name 'AppInsightsSnapshotCollectorLogs' and 'SnapshotStop' in the
custom data will be sent. When the forked process is created, the total paged memory will be increased by the
same amount as the paged memory of your running application (the working set will be much smaller). While your
application process is running normally, the shadow copied process's memory will be dumped to disk and
uploaded to Application Insights. After the snapshot is uploaded, an Application Insights custom event with the
name 'UploadSnapshotFinish' will be sent.
Is the snapshot collector working properly?
How to find Snapshot Collector logs
Snapshot collector logs are sent to your Application Insights account if version 1.1.0 or later of the Snapshot
Collector NuGet package is installed. Make sure ProvideAnonymousTelemetry is not set to false (the value is true by
default).
Navigate to your Application Insights resource in the Azure portal.
Click Search in the Overview section.
Enter the following string into the search box:
AppInsightsSnapshotCollectorLogs OR AppInsightsSnapshotUploaderLogs OR UploadSnapshotFinish OR UploaderStart
OR UploaderStop
Note: change the Time range if needed.

Examine Snapshot collector logs


When searching for Snapshot Collector logs, there should be 'UploadSnapshotFinish' events in the targeted time
range. If you still don't see the 'Open Debug Snapshot' button to open the Snapshot, please send email to
snapshothelp@[Link] with your Application Insights' Instrumentation Key.
I cannot find a snapshot to Open
If the following steps don't help you solve the issue, please send email to snapshothelp@[Link] with your
Application Insights' Instrumentation Key.
Step 1: Make sure your application is sending telemetry data and exception data to Application Insights
Navigate to Application Insights resource, check that there is data sent from your application.
Step 2: Make sure Snapshot collector is added correctly to your application's Application Insights Telemetry
pipeline
If you can find logs in the 'How to find Snapshot Collector logs' step, the snapshot collector is correctly added to
your project and you can skip this step.
If there are no Snapshot collector logs, verify the following:
For classic [Link] applications, check for this line in the [Link] file.
For [Link] Core applications, make sure the ITelemetryProcessorFactory with
SnapshotCollectorTelemetryProcessor is added to IServiceCollection services.
Also check that you're using the correct instrumentation key in your published application.
The Snapshot collector doesn't support multiple instrumentation keys within one application; it will send
snapshots to the instrumentation key of the first exception it observes.
If you set the InstrumentationKey manually in your code, update the InstrumentationKey element in
the [Link] to match.
Step 3: Make sure the minidump uploader is started
In the snapshot collector logs, search for UploaderStart (type UploaderStart in the search text box). There should be
an event when the snapshot collector monitored the first exception. If this event doesn't exist, check other logs for
details. One possible way to solve this issue is to restart your application.
Step 4: Make sure Snapshot Collector expressed its intent to collect snapshots
In the snapshot collector logs, search for RequestSnapshots (type RequestSnapshots in the search text box). If there
isn't any, please check your configuration, e.g. ThresholdForSnapshotting, which indicates the number of times a
specific exception can occur before it starts collecting snapshots.
Step 5: Make sure that Snapshot is not disabled due to Memory Protection
To protect your application's performance, a snapshot will only be captured when there is sufficient free memory.
In the snapshot collector logs, search for 'CannotSnapshotDueToMemoryUsage'. The event's custom data gives a
detailed reason. If your application is running in an Azure Web App, the restriction may be strict, since Azure
Web Apps will restart your app when certain memory rules are met. You can try to scale up your service plan to
machines with more memory to solve this issue.
Step 6: Make sure snapshots were captured
In the snapshot collector logs, search for RequestSnapshots . If none are present, check your configuration, e.g.
ThresholdForSnapshotting, which indicates the number of times a specific exception must occur before a snapshot
is collected.

Step 7: Make sure snapshots are uploaded correctly


In the snapshot collector logs, search for UploadSnapshotFinish . If this is not present, please send email to
snapshothelp@[Link] with your Application Insights' Instrumentation Key. If this event exists, open one of
the logs and copy the 'SnapshotId' value in the Custom Data. Then search for that value to find the exception
corresponding to the snapshot. Click the exception and open the debug snapshot. (If there is no corresponding
exception, the exception telemetry may have been sampled out due to high volume; please try another SnapshotId.)
Snapshot View: Local variables are not complete
Some of the local variables are missing. If your application is running release code, the compiler will optimize some
variables away. For example:

const int a = 1; // a will be discarded by the compiler and the value 1 will be inlined.
Random rand = new Random();
int b = [Link]() % 300; // b will be discarded and its value passed directly into the 'FindNthPrimeNumber' call stack.
long primeNumber = FindNthPrimeNumber(b);

If your case is different, it could be a bug. Please send email to snapshothelp@[Link] with your Application
Insights' Instrumentation Key along with the code snippet.

Snapshot View: Cannot obtain value of the local variable or argument


Please make sure the Application Insights site extension is installed. If the issue persists, please send email to
snapshothelp@[Link] with your Application Insights' Instrumentation Key.
Troubleshoot Analytics in Application Insights
11/1/2017 • 2 min to read

Problems with Application Insights Analytics? Start here. Analytics is the powerful search tool of Azure Application
Insights.

Limits
At present, query results are limited to just over a week of past data.
Browsers we test on: latest editions of Chrome, Edge, and Internet Explorer.

Known incompatible browser extensions


Ghostery
Disable the extension or use a different browser.

"Unexpected error"

Internal error occurred during portal runtime – unhandled exception.


Clear the browser's cache.

403 ... please try to reload

An authentication related error occurred (during authentication or during access token generation). The portal may
have no way to recover without changing browser settings.
Verify third party cookies are enabled in the browser.
403 ... verify security zone

An authentication related error occurred (during authentication or during access token generation). The portal may
have no way to recover without changing browser settings.
1. Verify third party cookies are enabled in the browser.
2. Did you use a favorite, bookmark or saved link to open the Analytics portal? Are you signed in with different
credentials than you used when you saved the link?
3. Try using an in-private/incognito browser window (after closing all such windows). You'll have to provide your
credentials.
4. Open another (ordinary) browser window and go to Azure. Sign out. Then open your link and sign in with the
correct credentials.
5. Edge and Internet Explorer users can also get this error when trusted zone settings are not supported.
Verify both Analytics portal and Azure Active Directory portal are in the same security zone:
In Internet Explorer, open Internet Options, Security, Trusted sites, Sites:
In the Websites list, if any of the following URLs are included, make sure that the others are included
also:
[Link]
[Link]
[Link]

404 ... Resource not found

Application resource was deleted from Application Insights and isn’t available anymore. This can happen if you
saved the URL to the Analytics page.

403 ... No authorization


You don't have permission to open this application in Analytics.
Did you get the link from someone else? Ask them to make sure you are in the readers or contributors group for this
resource group.
Did you save the link using different credentials? Open the Azure portal, sign out, and then try this link again,
providing the correct credentials.

403 ... HTML5 Storage


Our portal uses HTML5 localStorage and sessionStorage.
Chrome: Settings, privacy, content settings.
Internet Explorer: Internet Options, Advanced tab, Security, Enable DOM Storage

404 ... Subscription not found

The URL is invalid.


Open the app resource in Application Insights portal. Then use the Analytics button.

404 ... page doesn't exist


The URL is invalid.
Open the app resource in Application Insights portal. Then use the Analytics button.

Enable third-party cookies


See how to disable third-party cookies, but note that Analytics needs them enabled.

Analytics
Overview
Tour of Analytics
Start here. A tutorial covering the main features.
Queries
Use operators such as where and count to build queries.
Aggregation
Used to compute statistics over groups of records.
Scalars
Numbers, strings, and other expressions used to form query parameters.
Using Analytics
Using Analytics.
Language Reference
One-page reference.
Troubleshooting
Troubleshooting and Q and A for Application Insights for Java
11/6/2017 • 5 min to read

Questions or problems with Azure Application Insights in Java? Here are some tips.

Build errors
In Eclipse, when adding the Application Insights SDK via Maven or Gradle, I get build or checksum
validation errors.
If the dependency element is using a pattern with wildcard characters (e.g. (Maven) <version>[1.0,)</version>
or (Gradle) version:'1.0.+' ), try specifying a specific version instead like 1.0.2 . See the release notes for the
latest version.

No data
I added Application Insights successfully and ran my app, but I've never seen data in the portal.
Wait a minute and click Refresh. The charts refresh themselves periodically, but you can also refresh manually.
The refresh interval depends on the time range of the chart.
Check that you have an instrumentation key defined in the [Link] file (in the resources folder
in your project).
Verify that there is no <DisableTelemetry>true</DisableTelemetry> node in the xml file.
In your firewall, you might have to open TCP ports 80 and 443 for outgoing traffic to
[Link]. See the full list of firewall exceptions.
In the Microsoft Azure start board, look at the service status map. If there are some alert indications, wait until
they have returned to OK and then close and re-open your Application Insights application blade.
Turn on logging to the IDE console window, by adding an <SDKLogger /> element under the root node in the
[Link] file (in the resources folder in your project), and check for entries prefaced with [Error].
Make sure that the correct [Link] file has been successfully loaded by the Java SDK, by looking
at the console's output messages for a "Configuration file has been successfully found" statement.
If the config file is not found, check the output messages to see where the config file is being searched for, and
make sure that the [Link] is located in one of those search locations. As a rule of thumb, you
can place the config file near the Application Insights SDK JARs. For example: in Tomcat, this would mean the
WEB-INF/lib folder.
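Taken together, the checks above amount to verifying a configuration file along these lines. This is a minimal sketch: the root element's exact name and namespace depend on your SDK version, and the instrumentation key is a placeholder:

```xml
<!-- Minimal ApplicationInsights.xml sketch (resources folder).
     The instrumentation key below is a placeholder. -->
<ApplicationInsights>
  <InstrumentationKey>00000000-0000-0000-0000-000000000000</InstrumentationKey>
  <!-- Must not be present (or must be false) if you expect data: -->
  <!-- <DisableTelemetry>true</DisableTelemetry> -->
  <!-- Enables console diagnostics, with entries prefaced [Error]: -->
  <SDKLogger />
</ApplicationInsights>
```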
I used to see data, but it has stopped
Check the status blog.
Have you hit your monthly quota of data points? Open Settings/Quota and Pricing to find out. If so, you can
upgrade your plan, or pay for additional capacity. See the pricing scheme.
I don't see all the data I'm expecting
Open the Quotas and Pricing blade and check whether sampling is in operation. (100% transmission means
that sampling isn't in operation.) The Application Insights service can be set to accept only a fraction of the
telemetry that arrives from your app. This helps you keep within your monthly quota of telemetry.

No usage data
I see data about requests and response times, but no page view, browser, or user data.
You successfully set up your app to send telemetry from the server. Now your next step is to set up your web
pages to send telemetry from the web browser.
Alternatively, if your client is an app in a phone or other device, you can send telemetry from there.
Use the same instrumentation key to set up both your client and server telemetry. The data will appear in the same
Application Insights resource, and you'll be able to correlate events from client and server.

Disabling telemetry
How can I disable telemetry collection?
In code:

TelemetryConfiguration config = [Link]();
[Link](true);

Or
Update [Link] (in the resources folder in your project). Add the following under the root node:

<DisableTelemetry>true</DisableTelemetry>

Using the XML method, you have to restart the application when you change the value.

Changing the target


How can I change which Azure resource my project sends data to?
Get the instrumentation key of the new resource.
If you added Application Insights to your project using the Azure Toolkit for Eclipse, right click your web project,
select Azure, Configure Application Insights, and change the key.
Otherwise, update the key in [Link] in the resources folder in your project.

Debug data from the SDK


How can I find out what the SDK is doing?
To get more information about what's happening in the API, add <SDKLogger/> under the root node of the
[Link] configuration file.
You can also instruct the logger to output to a file:

<SDKLogger type="FILE">
<enabled>True</enabled>
<UniquePrefix>JavaSDKLog</UniquePrefix>
</SDKLogger>

The files can be found under %temp%\javasdklogs or, in the case of a Tomcat server, [Link].

The Azure start screen


I'm looking at the Azure portal. Does the map tell me something about my app?
No, it shows the health of Azure servers around the world.
From the Azure start board (home screen), how do I find data about my app?
Assuming you set up your app for Application Insights, click Browse, select Application Insights, and select the app
resource you created for your app. To get there faster in future, you can pin your app to the start board.

Intranet servers
Can I monitor a server on my intranet?
Yes, provided your server can send telemetry to the Application Insights portal through the public internet.
In your firewall, you might have to open TCP ports 80 and 443 for outgoing traffic to [Link]
and [Link].

Data retention
How long is data retained in the portal? Is it secure?
See Data retention and privacy.

Debug logging
Application Insights uses [Link]. This is relocated within Application Insights core jars under the
namespace [Link]. This enables Application Insights to handle
scenarios where different versions of the same [Link] exist in one code base.

NOTE
If you enable DEBUG level logging for all namespaces in the app, it will be honored by all executing modules including
[Link] renamed as [Link] . Application Insights will
not be able to apply filtering for these calls because the log call is being made by the Apache library. DEBUG level logging
produces a considerable amount of log data and is not recommended for live production instances.

Next steps
I set up Application Insights for my Java server app. What else can I do?
Monitor availability of your web pages
Monitor web page usage
Track usage and diagnose issues in your device apps
Write code to track usage of your app
Capture diagnostic logs

Get help
Stack Overflow
Troubleshoot usage analytics in Application Insights
1/17/2018 • 3 min to read

Have questions about the usage analytics tools in Application Insights: Users, Sessions, Events, Funnels, User Flows,
Retention, or Cohorts? Here are some answers.

Counting Users
The usage analytics tools show that my app has one user/session, but I know my app has many
users/sessions. How can I fix these incorrect counts?
All telemetry events in Application Insights have an anonymous user ID and a session ID as two of their standard
properties. By default, all of the usage analytics tools count users and sessions based on these IDs. If these standard
properties aren't being populated with unique IDs for each user and session of your app, you'll see an incorrect
count of users and sessions in the usage analytics tools.
If you're monitoring a web app, the easiest solution is to add the Application Insights JavaScript SDK to your app,
and make sure the script snippet is loaded on each page you want to monitor. The JavaScript SDK automatically
generates anonymous user and session IDs, then populates telemetry events with these IDs as they're sent from
your app.
If you're monitoring a web service (no user interface), create a telemetry initializer that populates the anonymous
user ID and session ID properties according to your service's notions of unique users and sessions.
If your app is sending authenticated user IDs, you can count based on authenticated user IDs in the Users tool. In
the "Show" dropdown, choose "Authenticated users."
The usage analytics tools don't currently support counting users or sessions based on properties other than
anonymous user ID, authenticated user ID, or session ID.
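The counting model described above can be sketched in a few lines: each event carries user and session IDs, and the tools take distinct counts over them, which is why events lacking unique IDs collapse into a single user. A plain-Python illustration with hypothetical events (not an SDK API):

```python
# Hypothetical telemetry events. Users and sessions are counted as
# distinct IDs; if every event carried the same (or empty) IDs, the
# tools would report one user and one session.
events = [
    {"anon_user_id": "u1", "session_id": "s1"},
    {"anon_user_id": "u1", "session_id": "s2"},
    {"anon_user_id": "u2", "session_id": "s3"},
]

users = len({e["anon_user_id"] for e in events})    # distinct users
sessions = len({e["session_id"] for e in events})   # distinct sessions
print(users, sessions)
```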

Naming Events
My app has thousands of different page view and custom event names. It's hard to distinguish between
them, and the usage analytics tools often become unresponsive. How can I fix these naming issues?
Page view and custom event names are used throughout the usage analytics tools. Naming events well is critical to
getting value from these tools. The goal is a balance between having too few, overly generic names ("Button
clicked") and having too many, overly specific names ("Edit button clicked on [Link]").
To make any changes to the page view and custom event names your app is sending, you need to change your
app's source code and redeploy. All telemetry data in Application Insights is stored for 90 days and cannot
be deleted, so changes you make to event names will take 90 days to fully manifest. For the 90 days after making
name changes, both the old and new event names will show up in your telemetry, so adjust queries and
communicate within your teams, accordingly.
If your app is sending too many page view names, check whether these page view names are specified manually in
code or if they're being sent automatically by the Application Insights JavaScript SDK:
If the page view names are manually specified in code using the trackPageView API, change the name to be
less specific. Avoid common mistakes like putting the URL in the name of the page view. Instead, use the
URL parameter of the trackPageView API. Move other details from the page view name into custom
properties.
If the Application Insights JavaScript SDK is automatically sending page view names, you can either change
your pages' titles or switch to manually sending page view names. The SDK sends the title of each page as
the page view name, by default. You could change your titles to be more general, but be mindful of SEO and
other impacts this change could have. Manually specifying page view names with the trackPageView API
overrides the automatically collected names, so you could send more general names in telemetry without
changing page titles.
If your app is sending too many custom event names, change the name in the code to be less specific. Again, avoid
putting URLs and other per-page or dynamic information in the custom event names directly. Instead, move these
details into custom properties of the custom event with the trackEvent API. For example, instead of
[Link]("Edit button clicked on [Link] , we suggest something like
[Link]("Edit button clicked", { "Source URL": "[Link] }) .
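The renaming advice above can be captured in a small helper that strips the dynamic URL out of a raw event name and returns it as a custom property instead. splitEventName is our own illustration, not an SDK API; only the trackEvent(name, properties) call shape mirrors the JavaScript SDK:

```javascript
// Sketch: move dynamic, per-page details out of the event name and into
// custom properties, so the usage tools can group events by a small set of
// stable names. splitEventName is a hypothetical helper for illustration.
function splitEventName(rawName) {
  // "Edit button clicked on https://..." ->
  // name "Edit button clicked", property "Source URL".
  const match = rawName.match(/^(.*?) on (https?:\/\/\S+)$/);
  if (!match) return { name: rawName, properties: {} };
  return { name: match[1], properties: { "Source URL": match[2] } };
}

const split = splitEventName("Edit button clicked on https://example.com/products/7");
// appInsights.trackEvent(split.name, split.properties);
```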

Next steps
Usage analytics overview

Get help
Stack Overflow
Application Insights telemetry data model
11/1/2017 • 2 min to read

Azure Application Insights sends telemetry from your web application to the Azure portal, so that you can analyze
the performance and usage of your application. The telemetry model is standardized so that it is possible to create
platform and language-independent monitoring.
Data collected by Application Insights models this typical application execution pattern:

The following types of telemetry are used to monitor the execution of your app. Three types are
typically collected automatically by the Application Insights SDK from the web application framework:
Request - Generated to log a request received by your app. For example, the Application Insights web SDK
automatically generates a Request telemetry item for each HTTP request that your web app receives.
An operation is the thread of execution that processes a request. You can also write code to monitor
other types of operations, such as a "wake up" in a web job or function that periodically processes data. Each
operation has an ID. This ID can be used to group all telemetry generated while your app is processing
the request. Each operation either succeeds or fails, and has a duration.
Exception - Typically represents an exception that causes an operation to fail.
Dependency - Represents a call from your app to an external service or storage such as a REST API or SQL. In
[Link], dependency calls to SQL are defined by [Link] . Calls to HTTP endpoints are defined by
[Link] .

Application Insights provides three additional data types for custom telemetry:
Trace - used either directly or through an adapter to implement diagnostics logging with an instrumentation
framework that is familiar to you, such as Log4Net or [Link] .
Event - typically used to capture user interaction with your service, to analyze usage patterns.
Metric - used to report periodic scalar measurements.
Every telemetry item can define context information like application version or user session id. Context is a set
of strongly typed fields that unblocks certain scenarios. When application version is properly initialized,
Application Insights can detect new patterns in application behavior correlated with redeployment. Session id can
be used to calculate the impact of an outage or an issue on users. Calculating the distinct count of session id values for a
certain failed dependency, error trace, or critical exception gives a good understanding of the impact.
The Application Insights telemetry model defines a way to correlate telemetry to the operation of which it's a part. For
example, a request can make SQL Database calls and record diagnostics information. You can set the correlation
context for those telemetry items to tie them back to the request telemetry.

Schema improvements
The Application Insights data model is a simple and basic yet powerful way to model your application telemetry. We
strive to keep the model simple and slim to support essential scenarios, while allowing the schema to be extended for
advanced use.
To report data model or schema problems and suggestions, use the GitHub ApplicationInsights-Home repository.

Next steps
Write custom telemetry
Learn how to extend and filter telemetry.
Use sampling to minimize the amount of telemetry based on the data model.
Check out platforms supported by Application Insights.
Request telemetry: Application Insights data model
11/1/2017 • 3 min to read

A request telemetry item (in Application Insights) represents the logical sequence of execution triggered by an
external request to your application. Every request execution is identified by a unique id and a url containing all the
execution parameters. You can group requests by logical name and define the source of the request. Code
execution can result in success or failure and has a certain duration . Both successful and failed executions can be
grouped further by resultCode . Start time for the request telemetry is defined on the envelope level.
Request telemetry supports the standard extensibility model using custom properties and measurements .

Name
The name of the request represents the code path taken to process the request. It's a low cardinality value that allows better
grouping of requests. For HTTP requests, it represents the HTTP method and URL path template like
GET /values/{id} without the actual id value.

The Application Insights web SDK sends the request name "as is" with regard to letter case. Grouping in the UI is case-
sensitive, so GET /Home/Index is counted separately from GET /home/INDEX even though they often result in the
same controller and action execution. The reason is that urls in general are case-sensitive. You may want to
check whether all 404s happened for urls typed in uppercase. You can read more on request name collection by the [Link]
Web SDK in the blog post.
Max length: 1024 characters

ID
Identifier of a request call instance. Used for correlation between the request and other telemetry items. The ID should be
globally unique. For more information, see the correlation page.
Max length: 128 characters

Url
Request URL with all query string parameters.
Max length: 2048 characters

Source
Source of the request. Examples are the instrumentation key of the caller or the ip address of the caller. For more
information, see correlation page.
Max length: 1024 characters

Duration
Request duration in the format [Link]:MM:[Link] . Must be positive and less than 1000 days. This field is required, as
request telemetry represents an operation with a beginning and an end.
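As a sketch, the duration string can be produced from a millisecond count like this, assuming the .NET TimeSpan-style days.hours:minutes:seconds.milliseconds convention (the exact format string is elided above, so treat the shape as an assumption):

```javascript
// Sketch: render a millisecond duration in a TimeSpan-like
// "days.hours:minutes:seconds.milliseconds" shape. The exact format
// Application Insights expects is elided in the text above; this
// convention is an assumption.
function formatDuration(ms) {
  const pad = (n, w) => String(n).padStart(w, "0");
  const days = Math.floor(ms / 86400000);
  const hours = Math.floor(ms / 3600000) % 24;
  const minutes = Math.floor(ms / 60000) % 60;
  const seconds = Math.floor(ms / 1000) % 60;
  const millis = ms % 1000;
  return `${days}.${pad(hours, 2)}:${pad(minutes, 2)}:${pad(seconds, 2)}.${pad(millis, 3)}`;
}
```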

Response code
Result of a request execution. HTTP status code for HTTP requests. It may be an HRESULT value or an exception type for
other request types.
Max length: 1024 characters

Success
Indication of a successful or unsuccessful call. This field is required. When not explicitly set to false , the request is
considered successful. Set this value to false if the operation was interrupted by an exception or returned an error
result code.
For web applications, Application Insights defines a request as successful when the response code is less than 400 or
equal to 401 . However, there are cases when this default mapping does not match the semantics of the application.
A response code of 404 may indicate "no records", which can be part of a regular flow. It also may indicate a broken
link. For broken links, you can even implement more advanced logic. You can mark broken links as failures only
when those links are located on the same site, by analyzing the url referrer. Or mark them as failures when accessed
from the company's mobile application. Similarly, 301 and 302 indicate failure when accessed from a client
that doesn't support redirects.
Partially accepted content 206 may indicate a failure of the overall request. For instance, the Application Insights
endpoint receives a batch of telemetry items as a single request. It returns 206 when some items in the batch were
not processed successfully. An increasing rate of 206 indicates a problem that needs to be investigated. Similar logic
applies to 207 Multi-Status, where the success may be the worst of the separate response codes.
You can read more on request result code and status code in the blog post.
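The custom success logic described above can be sketched as a single classifier. The option names are illustrative, not an SDK API; by default it mirrors the documented mapping (success when the code is less than 400 or equal to 401):

```javascript
// Sketch of the custom success mapping described above: treat 404 as success
// when it means "no records", 206 as failure when some batch items were
// rejected, and 301/302 as failure for clients that don't follow redirects.
// The option names are illustrative assumptions, not an SDK API.
function isRequestSuccess(statusCode, opts = {}) {
  if (statusCode === 404) return !!opts.notFoundMeansEmptyResult;
  if (statusCode === 206) return !opts.partialContentIsFailure;
  if ((statusCode === 301 || statusCode === 302) && opts.clientSupportsRedirect === false) {
    return false;
  }
  // Default mapping: success when the code is less than 400 or equal to 401.
  return statusCode < 400 || statusCode === 401;
}
```

A real implementation would sit in a telemetry initializer so the success flag is set before the item is sent.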

Custom properties
Name-value collection of custom properties. This collection is used to extend standard telemetry with the custom
dimensions. Examples are deployment slot that produced telemetry or telemetry-item specific property like order
number.
Max key length: 150 Max value length: 8192

Custom measurements
Collection of custom measurements. Use this collection to report named measurements associated with the
telemetry item. Typical use cases are:
the size of a Dependency telemetry payload
the number of queue items processed by a Request telemetry item
the time a customer took to complete a wizard step (step completion Event telemetry)
You can query custom measurements in Application Analytics:

customEvents
| where customMeasurements != ""
| summarize avg(todouble(customMeasurements["Completion Time"]) * itemCount)

NOTE
Custom measurements are associated with the telemetry item they belong to. They are subject to sampling with the
telemetry item containing those measurements. To track a measurement that has a value independent from other telemetry
types, use Metric telemetry.
Max key length: 150

Next steps
Write custom request telemetry
See data model for Application Insights types and data model.
Learn how to configure [Link] Core application with Application Insights.
Check out platforms supported by Application Insights.
Dependency telemetry: Application Insights data
model
11/1/2017 • 1 min to read

Dependency Telemetry (in Application Insights) represents an interaction of the monitored component with a
remote component such as SQL or an HTTP endpoint.

Name
Name of the command initiated with this dependency call. Low cardinality value. Examples are stored procedure
name and URL path template.

ID
Identifier of a dependency call instance. Used for correlation with the request telemetry item corresponding to this
dependency call. For more information, see correlation page.

Data
Command initiated by this dependency call. Examples are SQL statement and HTTP URL with all query parameters.

Type
Dependency type name. Low cardinality value for logical grouping of dependencies and interpretation of other
fields like commandName and resultCode. Examples are SQL, Azure table, and HTTP.

Target
Target site of a dependency call. Examples are server name, host address. For more information, see correlation
page.

Duration
Duration of the dependency call in the format [Link]:MM:[Link] . Must be less than 1000 days.

Result code
Result code of a dependency call. Examples are SQL error code and HTTP status code.

Success
Indication of successful or unsuccessful call.

Custom properties
Name-value collection of custom properties. This collection is used to extend standard telemetry with the custom
dimensions. Examples are deployment slot that produced telemetry or telemetry-item specific property like order
number.
Max key length: 150 Max value length: 8192
Custom measurements
Collection of custom measurements. Use this collection to report named measurements associated with the
telemetry item. Typical use cases are:
the size of a Dependency telemetry payload
the number of queue items processed by a Request telemetry item
the time a customer took to complete a wizard step (step completion Event telemetry)
You can query custom measurements in Application Analytics:

customEvents
| where customMeasurements != ""
| summarize avg(todouble(customMeasurements["Completion Time"]) * itemCount)

NOTE
Custom measurements are associated with the telemetry item they belong to. They are subject to sampling with the
telemetry item containing those measurements. To track a measurement that has a value independent from other telemetry
types, use Metric telemetry.

Max key length: 150

Next steps
Set up dependency tracking for .NET.
Set up dependency tracking for Java.
Write custom dependency telemetry
See data model for Application Insights types and data model.
Check out platforms supported by Application Insights.
Exception telemetry: Application Insights data model
11/1/2017 • 1 min to read

In Application Insights, an instance of Exception represents a handled or unhandled exception that occurred during
execution of the monitored application.

Problem Id
Identifier of where the exception was thrown in code. Used for grouping exceptions. Typically a combination of the
exception type and a function from the call stack.
Max length: 1024 characters
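A minimal sketch of how such a grouping key could be derived from the exception type and the topmost application frame. The stack-line format assumed here is the V8/Node style, and the real SDK's algorithm may differ:

```javascript
// Sketch: derive an exception grouping key ("problem id") from the exception
// type and the topmost stack frame. The "    at fn (file:line:col)" stack
// format is the V8/Node convention and is an assumption here.
function problemId(name, stack) {
  const frame = (stack.split("\n").find((l) => l.trim().startsWith("at ")) || "").trim();
  const method = frame.replace(/^at\s+/, "").split(" ")[0];
  return name + " at " + method;
}

const id = problemId(
  "TypeError",
  "TypeError: x is undefined\n    at loadStock (app.js:42:13)\n    at main (app.js:10:3)"
);
```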

Severity level
Trace severity level. Value can be Verbose , Information , Warning , Error , Critical .

Exception details
(To be extended)

Custom properties
Name-value collection of custom properties. This collection is used to extend standard telemetry with the custom
dimensions. Examples are deployment slot that produced telemetry or telemetry-item specific property like order
number.
Max key length: 150 Max value length: 8192

Custom measurements
Collection of custom measurements. Use this collection to report named measurements associated with the
telemetry item. Typical use cases are:
the size of a Dependency telemetry payload
the number of queue items processed by a Request telemetry item
the time a customer took to complete a wizard step (step completion Event telemetry)
You can query custom measurements in Application Analytics:

customEvents
| where customMeasurements != ""
| summarize avg(todouble(customMeasurements["Completion Time"]) * itemCount)

NOTE
Custom measurements are associated with the telemetry item they belong to. They are subject to sampling with the
telemetry item containing those measurements. To track a measurement that has a value independent from other telemetry
types, use Metric telemetry.
Max key length: 150

Next steps
See data model for Application Insights types and data model.
Learn how to diagnose exceptions in your web apps with Application Insights.
Check out platforms supported by Application Insights.
Trace telemetry: Application Insights data model
11/1/2017 • 1 min to read

Trace telemetry (in Application Insights) represents printf -style trace statements that are text-searched. Log4Net ,
NLog , and other text-based log file entries are translated into instances of this type. Unlike other types, trace
telemetry doesn't support custom measurements as an extensibility.

Message
Trace message.
Max length: 32768 characters

Severity level
Trace severity level. Value can be Verbose , Information , Warning , Error , Critical .

Custom properties
Name-value collection of custom properties. This collection is used to extend standard telemetry with the custom
dimensions. Examples are deployment slot that produced telemetry or telemetry-item specific property like order
number.
Max key length: 150 Max value length: 8192

Next steps
Explore .NET trace logs in Application Insights.
Explore Java trace logs in Application Insights.
See data model for Application Insights types and data model.
Write custom trace telemetry
Check out platforms supported by Application Insights.
Event telemetry: Application Insights data model
11/1/2017 • 1 min to read

You can create event telemetry items (in Application Insights) to represent an event that occurred in your
application. Typically an event is a user interaction, such as a button click or an order checkout. It can also be an application life
cycle event like initialization or a configuration update.
Semantically, events may or may not be correlated to requests. However, if used properly, event telemetry is more
important than requests or traces. Events represent business telemetry and should be subject to separate, less
aggressive sampling.

Name
Event name. To allow proper grouping and useful metrics, restrict your application so that it generates a small
number of separate event names. For example, don't use a separate name for each generated instance of an event.
Max length: 512 characters

Custom properties
Name-value collection of custom properties. This collection is used to extend standard telemetry with the custom
dimensions. Examples are deployment slot that produced telemetry or telemetry-item specific property like order
number.
Max key length: 150 Max value length: 8192

Custom measurements
Collection of custom measurements. Use this collection to report named measurements associated with the
telemetry item. Typical use cases are:
the size of a Dependency telemetry payload
the number of queue items processed by a Request telemetry item
the time a customer took to complete a wizard step (step completion Event telemetry)
You can query custom measurements in Application Analytics:

customEvents
| where customMeasurements != ""
| summarize avg(todouble(customMeasurements["Completion Time"]) * itemCount)

NOTE
Custom measurements are associated with the telemetry item they belong to. They are subject to sampling with the
telemetry item containing those measurements. To track a measurement that has a value independent from other telemetry
types, use Metric telemetry.

Max key length: 150

Next steps
See data model for Application Insights types and data model.
Write custom event telemetry
Check out platforms supported by Application Insights.
Metric telemetry: Application Insights data model
11/10/2017 • 2 min to read

There are two types of metric telemetry supported by Application Insights: single measurement and pre-
aggregated metric. A single measurement is just a name and a value. A pre-aggregated metric specifies the minimum
and maximum value of the metric in the aggregation interval, and its standard deviation.
Pre-aggregated metric telemetry assumes that the aggregation period was one minute.
There are several well-known metric names supported by Application Insights. These metrics are placed into the
performanceCounters table.
Metrics representing system and process counters:

.NET NAME | PLATFORM AGNOSTIC NAME | REST API NAME | DESCRIPTION
\Processor(_Total)\% Processor Time | Work in progress... | processorCpuPercentage | total machine CPU
\Memory\Available Bytes | Work in progress... | memoryAvailableBytes | physical memory available
\Process(??APP_WIN32_PROC??)\% Processor Time | Work in progress... | processCpuPercentage | CPU of the process hosting the application
\Process(??APP_WIN32_PROC??)\Private Bytes | Work in progress... | processPrivateBytes | memory used by the process hosting the application
\Process(??APP_WIN32_PROC??)\IO Data Bytes/sec | Work in progress... | processIOBytesPerSecond | rate of I/O operations run by the process hosting the application
\[Link] Applications(??APP_W3SVC_PROC??)\Requests/Sec | Work in progress... | requestsPerSecond | rate of requests processed by the application
\.NET CLR Exceptions(??APP_CLR_PROC??)\# of Exceps Thrown / sec | Work in progress... | exceptionsPerSecond | rate of exceptions thrown by the application
\[Link] Applications(??APP_W3SVC_PROC??)\Request Execution Time | Work in progress... | requestExecutionTime | average request execution time
\[Link] Applications(??APP_W3SVC_PROC??)\Requests In Application Queue | Work in progress... | requestsInQueue | number of requests waiting for processing in a queue

Name
Name of the metric you'd like to see in Application Insights portal and UI.

Value
Single value for measurement. Sum of individual measurements for the aggregation.

Count
Metric weight of the aggregated metric. Should not be set for a measurement.

Min
Minimum value of the aggregated metric. Should not be set for a measurement.

Max
Maximum value of the aggregated metric. Should not be set for a measurement.

Standard deviation
Standard deviation of the aggregated metric. Should not be set for a measurement.
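The aggregated fields above can be computed with a sketch like the following; the one-minute batching and flush timer are omitted, only the math is shown:

```javascript
// Sketch: pre-aggregate raw measurements over an interval into the fields
// described above (value = sum of measurements, count, min, max, standard
// deviation). Uses population standard deviation; the SDK's exact choice
// is not specified here.
function aggregate(values) {
  const count = values.length;
  const sum = values.reduce((a, b) => a + b, 0);
  const mean = sum / count;
  const variance = values.reduce((a, b) => a + (b - mean) ** 2, 0) / count;
  return {
    value: sum,
    count,
    min: Math.min(...values),
    max: Math.max(...values),
    stdDev: Math.sqrt(variance),
  };
}

const agg = aggregate([2, 4, 4, 4, 5, 5, 7, 9]);
```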

Custom properties
A metric with the custom property CustomPerfCounter set to true indicates that the metric represents a Windows
performance counter. These metrics are placed in the performanceCounters table, not in customMetrics. The name of
such a metric is parsed to extract the category, counter, and instance names.
Name-value collection of custom properties. This collection is used to extend standard telemetry with the custom
dimensions. Examples are deployment slot that produced telemetry or telemetry-item specific property like order
number.
Max key length: 150 Max value length: 8192

Next steps
Learn how to use Application Insights API for custom events and metrics.
See data model for Application Insights types and data model.
Check out platforms supported by Application Insights.
Telemetry context: Application Insights data model
6/27/2017 • 3 min to read

Every telemetry item may have strongly typed context fields. Every field enables a specific monitoring scenario.
Use the custom properties collection to store custom or application-specific contextual information.

Application version
Information in the application context fields is always about the application that is sending the telemetry.
Application version is used to analyze trend changes in the application behavior and its correlation to the
deployments.
Max length: 1024

Client IP address
The IP address of the client device. IPv4 and IPv6 are supported. When telemetry is sent from a service, the location
context is about the user that initiated the operation in the service. Application Insights extracts the geo-location
information from the client IP and then truncates it. So the client IP by itself cannot be used as end-user identifiable
information.
Max length: 46
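A sketch of one plausible truncation, zeroing the last IPv4 octet after geo-lookup; the exact truncation Application Insights applies isn't specified here, so treat this as an illustration of the idea:

```javascript
// Sketch: scrub a client IPv4 address after geo-lookup by zeroing the last
// octet, so the stored value can't identify an individual end user.
// (The exact truncation Application Insights applies is an assumption.)
function truncateIPv4(ip) {
  const parts = ip.split(".");
  if (parts.length !== 4) return ip; // leave IPv6 / malformed input alone
  parts[3] = "0";
  return parts.join(".");
}
```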

Device type
Originally this field was used to indicate the type of the device the end user of the application is using. Today it is used
primarily to distinguish JavaScript telemetry with the device type 'Browser' from server-side telemetry with the
device type 'PC'.
Max length: 64

Operation id
A unique identifier of the root operation. This identifier allows you to group telemetry across multiple components. See
telemetry correlation for details. The operation id is created by either a request or a page view. All other telemetry
sets this field to the value of the containing request or page view.
Max length: 128

Parent operation ID
The unique identifier of the telemetry item's immediate parent. See telemetry correlation for details.
Max length: 128

Operation name
The name (group) of the operation. The operation name is created by either a request or a page view. All other
telemetry items set this field to the value for the containing request or page view. Operation name is used for
finding all the telemetry items for a group of operations (for example 'GET Home/Index'). This context property is
used to answer questions like "what are the typical exceptions thrown on this page."
Max length: 1024

Synthetic source of the operation


Name of the synthetic source. Some telemetry from the application may represent synthetic traffic. It may be a web
crawler indexing the web site, site availability tests, or traces from diagnostic libraries like the Application Insights SDK
itself.
Max length: 1024

Session id
Session ID - the instance of the user's interaction with the app. Information in the session context fields is always
about the end user. When telemetry is sent from a service, the session context is about the user that initiated the
operation in the service.
Max length: 64

Anonymous user id
Anonymous user id. Represents the end user of the application. When telemetry is sent from a service, the user
context is about the user that initiated the operation in the service.
Sampling is one of the techniques to minimize the amount of collected telemetry. A sampling algorithm attempts to
either sample in or sample out all correlated telemetry. The anonymous user id is used for sampling score generation,
so it should be a random enough value.
Using the anonymous user id to store a user name is a misuse of the field. Use the authenticated user id instead.
Max length: 128
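The sampling behavior described above can be sketched as a deterministic score derived from the anonymous user id, so all of one user's telemetry is sampled in or out together. The djb2-style hash below is illustrative, not the SDK's actual algorithm:

```javascript
// Sketch: derive a deterministic sampling score in [0, 100) from the
// anonymous user id. The same id always produces the same score, so all
// telemetry correlated by that id is kept or dropped together.
// The hash function here is an illustrative assumption.
function samplingScore(userId) {
  let hash = 5381;
  for (let i = 0; i < userId.length; i++) {
    hash = ((hash * 33) ^ userId.charCodeAt(i)) >>> 0;
  }
  return (hash % 10000) / 100;
}

function shouldSample(userId, samplingPercentage) {
  return samplingScore(userId) < samplingPercentage;
}
```

Note that the scheme only distributes evenly if the user id is random enough, which is exactly the requirement stated above.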

Authenticated user id
Authenticated user id. The opposite of the anonymous user id, this field represents the user with a friendly name. Since
it is PII, it is not collected by default by most SDKs.
Max length: 1024

Account id
In multi-tenant applications this is the account ID or name that the user is acting with. Examples are a
subscription ID for the Azure portal or a blog name for a blogging platform.
Max length: 1024

Cloud role
Name of the role the application is a part of. Maps directly to the role name in Azure. Can also be used to
distinguish microservices that are part of a single application.
Max length: 256

Cloud role instance


Name of the instance where the application is running. Computer name for on-premises, instance name for Azure.
Max length: 256
Internal: SDK version
SDK version. See [Link]
[Link]#sdk-version-specification for information.
Max length: 64

Internal: Node name


This field represents the node name used for billing purposes. Use it to override the standard detection of nodes.
Max length: 256

Next steps
Learn how to extend and filter telemetry.
See data model for Application Insights types and data model.
Check out standard context properties collection configuration.
Telemetry correlation in Application Insights
11/9/2017 • 5 min to read

In the world of microservices, every logical operation requires work done in various components of the service.
Each of these components can be monitored separately by Application Insights. The web app component
communicates with the authentication provider component to validate user credentials, and with the API component
to get data for visualization. The API component in its turn can query data from other services, use cache-
provider components, and notify the billing component about the call. Application Insights supports distributed
telemetry correlation. It allows you to detect which component is responsible for failures or performance
degradation.
This article explains the data model used by Application Insights to correlate telemetry sent by multiple
components. It covers the context propagation techniques and protocols. It also covers the implementation of
the correlation concepts on different languages and platforms.

Telemetry correlation data model


Application Insights defines a data model for distributed telemetry correlation. To associate telemetry with a
logical operation, every telemetry item has a context field called operation_Id . This identifier is shared by every
telemetry item in the distributed trace. So even with the loss of telemetry from a single layer, you can still associate
telemetry reported by other components.
A distributed logical operation typically consists of a set of smaller operations - requests processed by one of the
components. Those operations are defined by request telemetry. Every request telemetry item has its own id that
identifies it globally and uniquely. And all telemetry - traces, exceptions, and so on - associated with the request
should set its operation_parentId to the value of the request id .
Every outgoing operation, like an http call to another component, is represented by dependency telemetry.
Dependency telemetry also defines its own id that is globally unique. Request telemetry, initiated by this
dependency call, uses it as operation_parentId .
You can build the view of the distributed logical operation using operation_Id , operation_parentId , and
[Link] with [Link] . Those fields also define the causality order of telemetry calls.

In a microservices environment, traces from components may go to different storages. Every component may
have its own instrumentation key in Application Insights. To get the telemetry for a logical operation, you need to
query data from every storage. When the number of storages is large, you need a hint on where to look
next.
Application Insights data model defines two fields to solve this problem: [Link] and [Link]
. The first field identifies the component that initiated the dependency request, and the second identifies which
component returned the response of the dependency call.

Example
Let's take an example of an application STOCK PRICES showing the current market price of a stock using the
external API called STOCKS API. The STOCK PRICES application has a page Stock page opened by the client web
browser using GET /Home/Stock . The application queries the STOCK API by using an HTTP call
GET /api/stock/value .

You can analyze resulting telemetry running a query:


(requests | union dependencies | union pageViews)
| where operation_Id == "STYz"
| project timestamp, itemType, name, id, operation_ParentId, operation_Id

In the result view, note that all telemetry items share the root operation_Id . When the ajax call is made from the page,
a new unique id qJSXU is assigned to the dependency telemetry, and the pageView's id is used as
operation_ParentId . In turn, the server request uses the ajax call's id as its operation_ParentId .

ITEMTYPE NAME ID OPERATION_PARENTID OPERATION_ID

pageView Stock page STYz STYz

dependency GET /Home/Stock qJSXU STYz STYz

request GET Home/Stock KqKwlrSt9PA= qJSXU STYz

dependency GET /api/stock/value bBrf2L7mm2g= KqKwlrSt9PA= STYz
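The parent/child links in the table can be walked back programmatically. A sketch using the four rows above:

```javascript
// Sketch: rebuild the parent/child view of the distributed operation from
// operation_ParentId links, using the four telemetry rows from the table.
function childrenOf(items, parentId) {
  return items.filter((i) => i.operation_ParentId === parentId).map((i) => i.name);
}

const items = [
  { itemType: "pageView",   name: "Stock page",           id: "STYz",         operation_ParentId: null,           operation_Id: "STYz" },
  { itemType: "dependency", name: "GET /Home/Stock",      id: "qJSXU",        operation_ParentId: "STYz",         operation_Id: "STYz" },
  { itemType: "request",    name: "GET Home/Stock",       id: "KqKwlrSt9PA=", operation_ParentId: "qJSXU",        operation_Id: "STYz" },
  { itemType: "dependency", name: "GET /api/stock/value", id: "bBrf2L7mm2g=", operation_ParentId: "KqKwlrSt9PA=", operation_Id: "STYz" },
];
```

Every item shares the root operation_Id, while each operation_ParentId points one hop up the causality chain.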

Now when the call GET /api/stock/value is made to an external service, you want to know the identity of that
server, so you can set the [Link] field appropriately. When the external service does not support
monitoring, target is set to the host name of the service, like [Link] . However, if that service
identifies itself by returning a predefined HTTP header, target contains the service identity, which allows
Application Insights to build a distributed trace by querying telemetry from that service.

Correlation headers
We are working on an RFC proposal for the correlation HTTP protocol. This proposal defines two headers:
Request-Id - carries the globally unique id of the call
Correlation-Context - carries the name-value pair collection of the distributed trace properties

The standard also defines two schemas of Request-Id generation - flat and hierarchical. With the flat schema,
there is a well-known Id key defined for the Correlation-Context collection.
Application Insights defines the extension for the correlation HTTP protocol. It uses Request-Context name value
pairs to propagate the collection of properties used by the immediate caller or callee. Application Insights SDK
uses this header to set [Link] and [Link] fields.
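A sketch of the hierarchical Request-Id scheme: each outgoing call derives a child id from the current id by appending a sequence number, so causality can be read back from the id itself. The exact grammar belongs to the draft protocol; this is an approximation:

```javascript
// Sketch of hierarchical Request-Id generation: a root id like "|abc123."
// produces child ids "|abc123.1.", "|abc123.2.", and so on for each outgoing
// call. The exact id grammar is defined by the draft correlation protocol;
// this is an illustrative approximation.
function makeIdGenerator(rootId) {
  let seq = 0;
  return {
    current: "|" + rootId + ".",
    nextChild() {
      seq += 1;
      return "|" + rootId + "." + seq + ".";
    },
  };
}

const gen = makeIdGenerator("abc123");
```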

Open tracing and Application Insights


The Open Tracing and Application Insights data models map to each other as follows:
request maps to Span with [Link] = server
pageView
dependency maps to Span with [Link] = client
id of a request and dependency maps to [Link]
operation_Id maps to TraceId
operation_ParentId maps to Reference of type ChildOf

See data model for Application Insights types and data model.
See specification and semantic_conventions for definitions of Open Tracing concepts.

Telemetry correlation in .NET


Over time, .NET defined a number of ways to correlate telemetry and diagnostics logs. There is
[Link] allowing you to track LogicalOperationStack and ActivityId.
[Link] and Windows ETW define the method SetCurrentThreadActivityId.
ILogger uses Log Scopes. WCF and Http wire up "current" context propagation.

However, those methods didn't enable automatic distributed tracing support. DiagnosticsSource is a way to
support automatic cross-machine correlation. .NET libraries support DiagnosticsSource and allow automatic
cross-machine propagation of the correlation context via a transport like http.
The guide to Activities in Diagnostics Source explains the basics of tracking Activities.
[Link] Core 2.0 supports extraction of Http Headers and starting the new Activity.
[Link] starting version <fill in> supports automatic injection of the correlation Http Headers
and tracking the http call as an Activity.
There is a new Http Module [Link] for [Link] Classic. This module
implements telemetry correlation using DiagnosticsSource. It starts an activity based on incoming request headers.
It also correlates telemetry from the different stages of request processing, even for cases when every stage
of IIS processing runs on a different managed thread.
The Application Insights SDK, starting with version 2.4.0-beta1, uses DiagnosticsSource and Activity to collect telemetry
and associate it with the current activity.

Next steps
Write custom telemetry
Onboard all components of your microservice on Application Insights. Check out the supported platforms.
See the data model article for Application Insights types.
Learn how to extend and filter telemetry.
Developer analytics: languages, platforms, and
integrations
11/15/2017 • 1 min to read

These items are implementations of Application Insights that we've heard about, including some by third
parties.

Languages - officially supported by Application Insights team


C#|VB (.NET)
Java
JavaScript web pages
[Link]

Languages - community-supported
PHP
Python
Ruby
Anything else

Platforms and frameworks


[Link]
[Link] - for apps that are already live
[Link] Core
Android (App Center, HockeyApp)
Azure Web Apps
Azure Cloud Services—including both web and worker roles
Azure Functions
Docker
Glimpse
iOS (App Center, HockeyApp)
J2EE
J2EE - for apps that are already live
macOS app (HockeyApp)
[Link]
OSX
Spring
Universal Windows app (App Center, HockeyApp)
WCF
Windows Phone 8 and 8.1 app (HockeyApp)
Windows Presentation Foundation app (HockeyApp)
Windows desktop applications, services, and worker roles
Anything else
Logging frameworks
Log4Net, NLog, or [Link]
Java, Log4J, or Logback
Semantic Logging (SLAB) - integrates with Semantic Logging Application Block
Cloud-based load testing
LogStash plugin
OMS Log Analytics
Logary

Content Management Systems


Concrete
Drupal
Joomla
Orchard
SharePoint
WordPress

Export and Data Analysis


Alooma
Power BI
Stream Analytics

Build your own SDK


If there isn't yet an SDK for your language or platform, perhaps you'd like to build one? Take a look at the code
of the existing SDKs listed in the Application Insights SDK project on GitHub.
SDK Release Notes - Application Insights
11/1/2017 • 1 min to read

Here are detailed release notes and update instructions for our SDKs:
[Link] Web Server SDK
.NET Core SDK
.NET Logging Adapters
[Link] Core
Java
JavaScript
Visual Studio tools
Other platforms
Read also our blogs and Service Updates which summarize major improvements in the Application Insights service
as a whole.
Release Notes for Developer Analytics Tools
11/1/2017 • 11 min to read

Version 7.18 (Visual Studio 2015)


Redesigned toast notifications.
"Not" filters in the Detail view for events in Application Insights Search.
Bug fixes

Version 8.6 (Visual Studio 2017 RTW and RC4) and Version 7.17 (Visual
Studio 2015)
Annotations marking when you publish your app from Visual Studio are now made to your data in the Metrics
Explorer in the Azure Portal
Markers are now added to scrollbars in code files, corresponding to red and yellow CodeLens warnings from
Application Insights
Updated pricing information in the Configuration window
Bug fixes
See the detailed notes here

Version 7.16 (Visual Studio 2015)


Bug fixes

Version 8.5 (Visual Studio 2017 RC3) and Version 7.15 (Visual Studio
2015)
CodeLens now shows both debug and live telemetry data in projects that send data to an Application Insights
resource
Application Insights pricing information is now shown in the Configuration window
CodeLens for requests and exceptions now supports [Link] projects written in Visual Basic
Application Insights Search now shows un-sampled event counts for events that have been sampled
Bug fixes

Version 7.14 (Visual Studio 2015)


Search support for availability (web test) and page view events
Trends support for availability (web test) and page view events
Diagnostic Tools and event details label for SDK Adaptive Sampling
Bug fixes

Version 7.12 (Visual Studio 2015)


New publish notification format
Bug fixes
Version 8.4 (Visual Studio 2017 RC2) and Version 7.11 (Visual Studio
2015)
CodeLens shows requests for local debug sessions for projects with the Application Insights SDK
CodeLens can take you directly to Application Analytics to see user impact
Insert JavaScript to collect page views
Bug fixes

Version 7.10 (Visual Studio 2015)


New design for the Application Insights Configuration window
Bug fixes

Version 7.9 (Visual Studio 2015)


CodeLens shows exceptions that have occurred during local debug sessions for projects with the Application
Insights SDK
Bug fixes

Version 8.3 (Visual Studio 2017 RC) and Version 7.8 (Visual Studio 2015)
New experience for adding Application Insights in the Configuration window
Bug fixes

Version 7.7 (Visual Studio 2015)


More accurate mappings from telemetry events to methods using custom [Link] routing
Bug fixes

Version 7.6 (Visual Studio 2015)


Analyze events involved in an operation from the new Track Operation tab on events in the Search tool
Bug fixes

Version 7.5 (Visual Studio 2015)


Production telemetry information for requests in Diagnostic Tools
Work Item creation from Related Items in the Search tool
Bug fixes

Version 7.4 (Visual Studio 2015)


The filter pane in Trends is now resizable
Bug fixes

Version 7.3 (Visual Studio 2015)


Requests in CodeLens
Configuration window
HockeyApp SDK updated to v4.2.2
Bug fixes
Version 7.2 (Visual Studio 2015)
Bug fixes

Version 7.1 (Visual Studio 2015)


Telemetry Readiness indicator in Application Insights Trends
Bug fixes

Version 7.0
Azure Application Insights Trends
Azure Application Insights is a new tool in Visual Studio that you can use to help you analyze how your app
operates over time. To get started, on the Application Insights toolbar button or in the Application Insights
Search window, choose Explore Telemetry Trends. Or, on the View menu, click Other Windows, and then click
Application Insights Trends. Choose one of five common queries to get started. You can analyze different data
sets based on telemetry types, time ranges, and other properties. To find anomalies in your data, choose one of the
anomaly options in the View Type drop-down list. The filtering options at the bottom of the window make it easy
to hone in on specific subsets of your telemetry.

Exceptions in CodeLens
Exception telemetry is now displayed in CodeLens. If you've connected your project to the Application Insights
service, you'll see the number of exceptions that have occurred in each method in production in the past 24 hours.
From CodeLens, you can jump to Search or Trends to investigate the exceptions in more detail.
[Link] Core support
Application Insights now supports [Link] Core RC2 projects in Visual Studio. You can add Application Insights to
new [Link] Core RC2 projects from the New Project dialog, as in the following screenshot. Or, you can add it to
an existing project, right-click the project in Solution Explorer, and then click Add Application Insights
Telemetry.

[Link] 5 RC1 and [Link] Core RC2 projects also have new support in the Diagnostic Tools window. You'll see
Application Insights events like requests and exceptions from your [Link] app while you debug locally on your
PC. From each event, click Search to drill down for more information.
HockeyApp for Universal Windows apps
In addition to beta distribution and user feedback, HockeyApp provides symbolicated crash reporting for your
Universal Windows apps. We've made it even easier to add the HockeyApp SDK: right-click on your Universal
Windows project, and then click Hockey App - Enable Crash Analytics. This installs the SDK, sets up crash
collection, and provisions a HockeyApp resource in the cloud, all without uploading your app to the HockeyApp
service.
Other new features:
We've made the Application Insights Search experience faster and more intuitive. Now, time ranges and detail
filters are automatically applied as you select them.
Also in Application Insights Search, now there's an option to jump to the code directly from the request
telemetry.
We've made improvements to the HockeyApp sign-in experience.
In Diagnostic Tools, production telemetry information for exceptions is displayed.

Version 5.2
We are happy to announce the introduction of HockeyApp scenarios in Visual Studio. The first integration is in beta
distribution of Universal Windows apps and Windows Forms apps from within Visual Studio.
With beta distribution, you upload early versions of your apps to HockeyApp for distribution to a selected subset of
customers or testers. Beta distribution, combined with HockeyApp crash collection and user feedback features, can
provide you with valuable information about your app before you make a broad release. You can use this
information to address issues with your app so that you can avoid or minimize future problems, such as low app
ratings, negative feedback, and so on.
Check out how simple it is to upload builds for beta distribution from within Visual Studio.
Universal Windows apps
The context menu for a Universal Windows app project node now includes an option to upload your build to
HockeyApp.

Choose the item and the HockeyApp upload dialog box opens. You will need a HockeyApp account to upload your
build. If you are a new user, don't worry. Creating an account is a simple process.
When you are connected, you will see the upload form in the dialog.

Select the content to upload (an .appxbundle or .appx file), and then choose release options in the wizard.
Optionally, you can add release notes on the next page. Choose Finish to begin the upload.
When the upload is complete, a HockeyApp notification with confirmation and a link to the app in the HockeyApp
portal appears.

That’s it! You've just uploaded a build for beta distribution with just a few clicks.
You can manage your application in numerous ways in the HockeyApp portal. This includes inviting users, viewing
crash reports and feedback, changing details, and so on.

See the HockeyApp Knowledge Base for more details about app management.
Windows Forms apps
The context menu for a Windows Form project node now includes an option to upload your build to HockeyApp.

This opens the HockeyApp upload dialog, which is similar to the one in a Universal Windows app.

Note a new field in this wizard for specifying the version of the app. For Universal Windows apps, this information
is populated from the manifest. Windows Forms apps, unfortunately, don't have an equivalent, so you
will need to specify the version manually.
The rest of the flow is similar to Universal Windows apps: choose build and release options, add release notes,
upload, and manage in the HockeyApp portal.
It’s as simple as that. Give it a try and let us know what you think.

Version 4.3
Search telemetry from local debug sessions
With this release, you can now search for Application Insights telemetry generated in the Visual Studio debug
session. Before, you could use search only if you registered your app with Application Insights. Now, your app only
needs to have the Application Insights SDK installed to search for local telemetry.
If you have an [Link] application with the Application Insights SDK, do the following steps to use Search.
1. Debug your application.
2. Open Application Insights Search in one of these ways:
On the View menu, click Other Windows, and then click Application Insights Search.
Click the Application Insights toolbar button.
In Solution Explorer, expand [Link], and then click Search debug session
telemetry.
3. If you haven't signed up with Application Insights, the Search window will open in debug session telemetry
mode.
4. Click the Search icon to see your local telemetry.

Version 4.2
In this release, we added features to make searching for data easier in the context of events, with the ability to jump
to code from more data events, and an effortless experience to send your logging data to Application Insights. This
extension is updated monthly. If you have feedback or feature requests, send it to aidevtools@[Link].
No-click logging experience
If you're already using NLog, log4net, or [Link], you don't have to worry about moving all of
your traces to Application Insights. In this release, we've integrated the Application Insights logging adapters with
the normal configuration experience. If you already have one of these logging frameworks configured, the
following section describes how to get it. If you've already added Application Insights:
1. Right-click the project node, and then click Application Insights, and then click Configure Application
Insights. Make sure that you see the option to add the correct adapter in the configuration window.
2. Alternatively, when you build the solution, note the pop-up window that appears on the top right of your screen
and click Configure.
When you have the Logging adapter installed, run your application and make sure you see the data in the
diagnostic tools tab, like this:

Jump to or find the code where the telemetry event property is emitted
With the new release, you can click any value in the event detail to search for a matching string in the
currently open solution. Results show up in the Visual Studio "Find Results" list as shown below:
New Search window for when you are not signed in
We've improved the look of the Application Insights Search window to help you search your data while your app is
in production.

See all telemetry events associated with the event


We've added a new tab, with predefined queries for all data related to the telemetry event the user is viewing, next
to the tab for event details. For example, a request has a field called Operation ID. Every event associated to this
request has the same value for Operation ID. If an exception occurs while the operation is processing the request,
the exception is given the same operation ID as the request to make it easier to find. If you're looking at a request,
click All telemetry for this operation to open a new tab that displays the new search results.
Forward and Back history in Search
Now you can go back and forth between search results.

Version 4.1
This release comes with a number of new features and updates. You need to have Update 1 installed to install this
release.
Jump from an exception to method in source code
Now, if you view exceptions from your production app in the Application Insights Search window, you can jump to
the method in your code where the exception is occurring. You only need to have the correct project loaded and
Application Insights takes care of the rest! (To learn more about the Application Insights Search window, see the
release notes for Version 4.0 in the following sections.)
How does it work? You can use Application Insights Search even when a solution isn't open. The stack trace area
displays an information message, and many of the items in the stack trace are unavailable.
If file information is available, some items might be links, but the solution information item will still be visible.
If you click the hyperlink, you'll jump to the location of the selected method in your code. There might be a
difference in the version number; the ability to jump to the correct version of the code will come in a later
release.
New entry points to the Search experience in Solution Explorer
Now you can access Search through Solution Explorer.

Displays a notification when publish is completed


A pop-up dialog box appears when the project is published online, so that you can view your Application Insights
data in production.
Version 4.0
Search Application Insights data from within Visual Studio
Like the search function in the Application Insights portal, now in Visual Studio you can filter and search on event
types, property values, and text, and then inspect individual events.

See data coming from your local computer in Diagnostic Tools


You can view your telemetry, in addition to other debugging data, on the Visual Studio Diagnostic Tools page. Only
[Link] 4.5 is supported.
Add the SDK to your project without signing in to Azure
You no longer have to sign in to Azure to add Application Insights packages to your project, either through the
New Project dialog or from the project context menu. If you do sign in, the SDK will be installed and configured to
send telemetry to the portal as before. If you don’t sign in, the SDK will be added to your project and it will
generate telemetry for the diagnostic hub. You can configure it later if you want.

Device support
At Connect(); 2015, we announced that our mobile developer experience for devices is HockeyApp. HockeyApp
helps you distribute beta builds to your testers, collect and analyze all crashes from your app, and collect feedback
directly from your customers. HockeyApp supports your app on whichever platform you choose to build it,
whether that be iOS, Android, or Windows, or a cross-platform solution like Xamarin, Cordova, or Unity.
In future releases of the Application Insights extension, we’ll introduce a more integrated experience between
HockeyApp and Visual Studio. For now, you can start with HockeyApp by simply adding the NuGet reference. See
the documentation for more information.
Application Insights: Frequently Asked Questions
1/3/2018 • 8 min to read

Configuration problems
I'm having trouble setting up my:
.NET app
Monitoring an already-running app
Azure diagnostics
Java web app
I get no data from my server
Set firewall exceptions
Set up an [Link] server
Set up a Java server

Can I use Application Insights with ...?


Web apps on an IIS server - on-premises or in a VM
Java web apps
[Link] apps
Web apps on Azure
Cloud Services on Azure
App servers running in Docker
Single-page web apps
Sharepoint
Windows desktop app
Other platforms

Is it free?
Yes, for experimental use. In the basic pricing plan, your application can send a certain allowance of data each
month free of charge. The free allowance is large enough to cover development, and publishing an app for a
small number of users. You can set a cap to prevent more than a specified amount of data from being
processed.
Larger volumes of telemetry are charged by the GB. We provide some tips on how to limit your charges.
The Enterprise plan incurs a charge for each day that each web server node sends telemetry. It is suitable if you
want to use Continuous Export on a large scale.
Read the pricing plan.

How much is it costing?


Open the Features + pricing page in an Application Insights resource. There's a chart of recent usage. You
can set a data volume cap, if you want.
Open the Azure Billing blade to see your bills across all resources.
What does Application Insights modify in my project?
The details depend on the type of project. For a web application:
Adds these files to your project:
[Link].
[Link]
Installs these NuGet packages:
Application Insights API - the core API
Application Insights API for Web Applications - used to send telemetry from the server
Application Insights API for JavaScript Applications - used to send telemetry from the client
The packages include these assemblies:
[Link]
[Link]
Inserts items into:
[Link]
[Link]
(New projects only - if you add Application Insights to an existing project, you have to do this manually.)
Inserts snippets into the client and server code to initialize them with the Application Insights resource ID. For
example, in an MVC app, code is inserted into the master page Views/Shared/_Layout.cshtml

How do I upgrade from older SDK versions?


See the release notes for the SDK appropriate to your type of application.

How can I change which Azure resource my project sends data to?
In Solution Explorer, right-click [Link] and choose Update Application Insights. You can
send the data to an existing or new resource in Azure. The update wizard changes the instrumentation key in
[Link], which determines where the server SDK sends your data. Unless you deselect
"Update all," it will also change the key where it appears in your web pages.

What is Status Monitor?


A desktop app that you can use in your IIS web server to help configure Application Insights in web apps. It
doesn't collect telemetry: you can stop it when you are not configuring an app.
Learn more.

What telemetry is collected by Application Insights?


From server web apps:
HTTP requests
Dependencies. Calls to: SQL Databases; HTTP calls to external services; Azure Cosmos DB, table, blob storage,
and queue.
Exceptions and stack traces.
Performance Counters - If you use Status Monitor, Azure monitoring or the Application Insights collectd
writer.
Custom events and metrics that you code.
Trace logs if you configure the appropriate collector.
From client web pages:
Page view counts
AJAX calls: requests made from a running script.
Page view load data
User and session counts
Authenticated user IDs
From other sources, if you configure them:
Azure diagnostics
Docker containers
Import tables to Analytics
OMS (Log Analytics)
Logstash

Can I filter out or modify some telemetry?


Yes, in the server you can write:
Telemetry Processor to filter or add properties to selected telemetry items before they are sent from your
app.
Telemetry Initializer to add properties to all items of telemetry.
Learn more for [Link] or Java.
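The division of labor between the two extension points can be sketched as a simple pipeline. The function names and item shapes below are hypothetical, but the roles mirror a Telemetry Initializer (enrich every item) and a Telemetry Processor (modify or drop selected items before they are sent):

```python
def initializer(item):
    """Initializers add properties to every telemetry item."""
    item.setdefault("properties", {})["cloud_role"] = "web-frontend"
    return item

def processor(item):
    """Processors can modify or drop selected items before they are sent."""
    if item.get("type") == "request" and item.get("success", True):
        if item.get("duration_ms", 0) < 5:
            return None  # drop fast, successful requests to save data volume
    return item

def pipeline(items):
    """Run each item through the initializer, then the processor chain."""
    sent = []
    for item in items:
        item = processor(initializer(item))
        if item is not None:
            sent.append(item)
    return sent
```

A failed request survives the filter while a fast, successful one is dropped; both carry the property the initializer stamped on them.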

How are City, Country, and other geolocation data calculated?
We look up the IP address (IPv4 or IPv6) of the web client using GeoLite2.
Browser telemetry: We collect the sender's IP address.
Server telemetry: The Application Insights module collects the client IP address. It is not collected if
X-Forwarded-For is set.

You can configure the ClientIpHeaderTelemetryInitializer to take the IP address from a different header. In
some systems, for example, it is moved by a proxy, load balancer, or CDN to X-Originating-IP . Learn more.
You can use Power BI to display your request telemetry on a map.

How long is data retained in the portal? Is it secure?


Take a look at Data Retention and Privacy.

Might personally identifiable information (PII) be sent in the telemetry?


This is possible if your code sends such data. It can also happen if variables in stack traces include PII. Your
development team should conduct risk assessments to ensure that PII is properly handled. Learn more about
data retention and privacy.
The last octet of the client web address is always set to 0 after ingestion by the portal.

My iKey is visible in my web page source.


This is common practice in monitoring solutions.
It can't be used to steal your data.
It could be used to skew your data or trigger alerts.
We have not heard that any customer has had such problems.
You could:
Use two separate iKeys (separate Application Insights resources), for client and server data. Or
Write a proxy that runs in your server, and have the web client send data through that proxy.

How do I see POST data in Diagnostic search?


We don't log POST data automatically, but you can use a TrackTrace call: put the data in the message parameter.
This has a longer size limit than the limits on string properties, though you can't filter on it.

Should I use single or multiple Application Insights resources?


Use a single resource for all the components or roles in a single business system. Use separate resources for
development, test, and release versions, and for independent applications.
See the discussion here
Example - cloud service with worker and web roles

How do I dynamically change the instrumentation key?


Discussion here
Example - cloud service with worker and web roles

What are the User and Session counts?


The JavaScript SDK sets a user cookie on the web client, to identify returning users, and a session cookie to
group activities.
If there is no client-side script, you can set cookies at the server.
If one real user uses your site in different browsers, or using in-private/incognito browsing, or different
machines, then they will be counted more than once.
To identify a logged-in user across machines and browsers, add a call to setAuthenticatedUserContext().

Have I enabled everything in Application Insights?


WHAT YOU SHOULD SEE | HOW TO GET IT | WHY YOU WANT IT
Availability charts | Web tests | Know your web app is up
Server app perf: response times, ... | Add Application Insights to your project, or install AI Status Monitor on the server (or write your own code to track dependencies) | Detect perf issues
Dependency telemetry | Install AI Status Monitor on the server | Diagnose issues with databases or other external components
Get stack traces from exceptions | Insert TrackException calls in your code (but some are reported automatically) | Detect and diagnose exceptions
Search log traces | Add a logging adapter | Diagnose exceptions, perf issues
Client usage basics: page views, sessions, ... | JavaScript initializer in web pages | Usage analytics
Client custom metrics | Tracking calls in web pages | Enhance user experience
Server custom metrics | Tracking calls in server code | Business intelligence

Why are the counts in Search and Metrics charts unequal?


Sampling reduces the number of telemetry items (requests, custom events, and so on) that are actually sent
from your app to the portal. In Search, you see the number of items actually received. In metric charts that
display a count of events, you see the number of original events that occurred.
Each item that is transmitted carries an itemCount property that shows how many original events that item
represents. To observe sampling in operation, you can run this query in Analytics:

requests | summarize original_events = sum(itemCount), transmitted_events = count()
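The arithmetic behind that query can be sketched directly. The sample data below is invented for illustration; the itemCount semantics are as described above (each transmitted item represents that many original events):

```python
# With a 25% sampling rate, each transmitted item stands in for 4 original events.
transmitted = [
    {"name": "GET /", "itemCount": 4},
    {"name": "GET /home", "itemCount": 4},
    {"name": "GET /api", "itemCount": 4},
]

transmitted_events = len(transmitted)                        # what Search shows
original_events = sum(i["itemCount"] for i in transmitted)   # what metric charts show

assert transmitted_events == 3
assert original_events == 12
```

Search reports 3 items while the metric chart reports 12 events, which is exactly the discrepancy the question asks about.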

Automation
Configuring Application Insights
You can write PowerShell scripts using Azure Resource Manager to:
Create and update Application Insights resources.
Set the pricing plan.
Get the instrumentation key.
Add a metric alert.
Add an availability test.
You can't set up a Metric Explorer report or set up continuous export.
Querying the telemetry
Use the REST API to run Analytics queries.

How can I set an alert on an event?


Azure alerts are only on metrics. Create a custom metric that crosses a value threshold whenever your event
occurs, and then set an alert on that metric. Note that you'll get a notification whenever the metric crosses the
threshold in either direction; you won't get a notification until the first crossing, no matter whether the initial
value is high or low; and there is always a latency of a few minutes.
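The "either direction" behavior is worth spelling out, since it means the event firing and the metric returning to normal each produce a notification. A small sketch of that crossing logic (the function is hypothetical, not the Azure alert engine):

```python
def crossings(values, threshold):
    """Return indices where the series crosses the threshold in either direction."""
    hits = []
    for i in range(1, len(values)):
        before, after = values[i - 1], values[i]
        if (before < threshold) != (after < threshold):
            hits.append(i)
    return hits

# The custom metric is 0 normally and 1 when the event fires.
series = [0, 0, 1, 1, 0, 1]
assert crossings(series, 0.5) == [2, 4, 5]
```

Notice that index 4, where the metric drops back to 0, also counts as a crossing: that is the extra notification the answer above warns about.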

Are there data transfer charges between an Azure web app and
Application Insights?
If your Azure web app is hosted in a data center where there is an Application Insights collection endpoint,
there is no charge.
If there is no collection endpoint in your host data center, then your app's telemetry will incur Azure outgoing
charges.
This doesn't depend on where your Application Insights resource is hosted. It just depends on the distribution of
our endpoints.

Can I send telemetry to the Application Insights portal?


We recommend you use our SDKs and use the SDK API. There are variants of the SDK for various platforms.
These SDKs handle buffering, compression, throttling, retries, and so on. However, the ingestion schema and
endpoint protocol are public.

Can I monitor an intranet web server?


Here are two methods:
Firewall door
Allow your web server to send telemetry to our endpoints [Link] and
[Link]
Proxy
Route traffic from your server to a gateway on your intranet, by setting this in [Link]:

<TelemetryChannel>
<EndpointAddress>your gateway endpoint</EndpointAddress>
</TelemetryChannel>

Your gateway should route the traffic to [Link]

Can I run Availability web tests on an intranet server?


Our web tests run on points of presence that are distributed around the globe. There are two solutions:
Firewall door - Allow requests to your server from the long and changeable list of web test agents.
Write your own code to send periodic requests to your server from inside your intranet. You could run Visual
Studio web tests for this purpose. The tester could send the results to Application Insights using the
TrackAvailability() API.
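The do-it-yourself option above amounts to a periodic probe that reports its result through the availability API. A hedged sketch, where fetch and track_availability are hypothetical stand-ins for your HTTP call and the TrackAvailability() API:

```python
import time

def check(url, fetch, track_availability):
    """Probe an intranet URL once and report the outcome as an availability result."""
    start = time.monotonic()
    try:
        status = fetch(url)                 # your HTTP call; returns a status code
        success = 200 <= status < 400
    except Exception:
        success = False                     # connection failures count as unavailable
    duration = time.monotonic() - start
    track_availability(name="intranet-ping", success=success, duration=duration)
    return success
```

Run check() on a timer (or from a Visual Studio web test, as suggested above) and the results appear alongside telemetry from the standard web tests.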

More answers
Application Insights forum

Common questions

Powered by AI

A major challenge when using Application Insights in a high-load production environment is managing data volume, which can lead to performance bottlenecks and increased costs due to high telemetry data output. Mitigating this involves configuring sampling to reduce data without losing critical insights, utilizing filters for relevant telemetry, and setting appropriate data retention policies . It's also essential to regularly review and optimize query performance in the Analytics tool to ensure efficient data retrieval . Furthermore, setting up alerts and anomaly detection helps in proactively managing issues that could arise from high load .

Workbooks in Application Insights enhance understanding and reporting by offering an interactive environment to combine data visualizations, analytical queries, and narrative text. This integration allows teams to curate comprehensive reports on application usage and incident insights, tailored to organizational needs . Workbooks facilitate multidisciplinary collaboration by embedding charts, tables, and textual insights within a single document, which can be updated or shared across teams . Furthermore, they simplify the reporting process by allowing users to leverage pre-configured sample queries or create new ones, enabling a flexible approach to monitoring and incident management .

Application Insights can be integrated into DevOps processes to enhance both application development and operations by providing continuous monitoring and feedback loops. It allows developers to identify issues in real-time during the development phase, facilitating rapid iterations and improvements . By integrating with CI/CD pipelines, Application Insights helps in automatically deploying applications and verifying their performance under real-world conditions through telemetry analysis . Additionally, its collaboration with tools like Azure DevOps or GitHub enriches team workflows with instant access to performance metrics, empowering operational teams to address issues proactively and maintain application health .

Custom dashboards in Application Insights allow for centralized monitoring by aggregating various performance indicators and user metrics, acting as a holistic management tool for both application health and usability. By pinning relevant charts, metrics, and alerts, teams can create an operational view tailored to their application's architecture and usage patterns . This customization enables stakeholders to quickly assess critical aspects such as server responsiveness, error rates, and user engagement trends, facilitating timely interventions . Additionally, integrating visuals from other Azure services, like Stream Analytics, provides a comprehensive view, enhancing strategic planning and operational agility .

Developers can customize telemetry in Application Insights by using the Application Insights SDK to track custom events and metrics that align with specific monitoring needs. Strategies include embedding custom data properties and values in telemetry items for further context . Developers can also use the SDK to configure telemetry processors and initializers, modifying or discarding data before it's sent to Azure. This customization can be used to filter specific reports, such as excluding noise from telemetry, or focusing on critical paths like high-value transactions . Additionally, integrating custom logging mechanisms provides in-depth insights with minimal overhead .

Smart Detection alerts in Application Insights are automatic and leverage machine learning to detect anomalies in application performance, such as unexpected failure rates or slow response times, without needing manual configuration . They are advantageous for proactive issue identification and require no setup, offering insights based on normal behavior baselines once established . Conversely, manually configured alerts are set up by specifying thresholds for specific metrics, providing control over the monitoring scope, and allowing customization based on known performance baselines or business requirements . This customization ensures targeted alerts for anticipated issues, complementing Smart Detection by catching predefined conditions .

The Application Insights Analytics query language is central to deeper diagnostics because it enables complex queries and analysis over stored telemetry data. It lets developers drill into collected data for detailed exploration of user behavior, application performance, and operational anomalies. Developers can also use it to build custom dashboards and visualizations tailored to specific diagnostic requirements, empowering teams to make precise, data-driven decisions. Its flexibility and power are essential for surfacing subtle trends or anomalies that the default monitoring and alerting features do not reveal.
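As a flavor of what such queries look like, the following sketch summarizes request telemetry per hour; it assumes the standard Application Insights `requests` table schema (`timestamp`, `success`, `duration`), and the exact column names may differ by schema version:

```kusto
// Requests over the last day, bucketed by hour: total count,
// failure count, and 95th-percentile duration.
requests
| where timestamp > ago(1d)
| summarize total = count(),
            failures = countif(success == false),
            p95_duration = percentile(duration, 95)
  by bin(timestamp, 1h)
| order by timestamp asc
```

A query like this can be pinned to a dashboard or used ad hoc to spot the hour in which failures or latency spiked.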

Application Insights helps teams understand user navigation patterns through the User Flows tool, which visualizes how users move between the pages and features of an application. Starting from a selected initial page view or custom event, it tracks subsequent user actions, showing the paths users follow and where sessions end, thereby identifying where users drop off or repeat actions. Insights derived include identifying high-exit pages, understanding user engagement with specific features, and discovering patterns that indicate interaction bottlenecks. This information guides improvements to user experience and UI design that optimize user retention and engagement.
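Under the hood, a User-Flows-style visualization is built from page-to-page transition counts across sessions. The following is a hypothetical sketch of that aggregation (the session data and page names are invented for illustration):

```python
from collections import Counter

def transitions(sessions):
    """Count page-to-page transitions across user sessions -- the raw
    data behind a User-Flows-style diagram."""
    counts = Counter()
    for pages in sessions:
        for src, dst in zip(pages, pages[1:]):
            counts[(src, dst)] += 1
    return counts

sessions = [
    ["/home", "/search", "/product", "/checkout"],
    ["/home", "/search", "/search"],   # a repeated action
    ["/home", "/product"],
]
flows = transitions(sessions)
assert flows[("/home", "/search")] == 2
assert flows[("/search", "/search")] == 1  # surfaces repeated searches
```

Rare outgoing edges from a page (relative to its incoming traffic) correspond to the "high-exit pages" the tool highlights.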

Telemetry correlation in Application Insights links related telemetry items, such as requests, exceptions, and traces, through a shared operation ID or operation context. This provides a comprehensive view of how different components of an application interact during a single operation, aiding the identification of performance bottlenecks and root-cause analysis. By creating a unique operation context, developers can trace how individual user actions or system operations propagate through the application, enabling granular performance analysis and improved diagnostics.
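The shared-operation-ID mechanism can be sketched in miniature. This is a conceptual illustration only, not the SDK's implementation; the `track` and `handle_request` helpers are hypothetical:

```python
import uuid
from contextvars import ContextVar

# Shared operation context: every item recorded while an operation is
# active carries the same operation_id, so related requests, traces,
# and exceptions can later be joined for root-cause analysis.
operation_id: ContextVar[str] = ContextVar("operation_id", default="")
telemetry = []

def track(item_type, message):
    """Record a telemetry item stamped with the current operation ID."""
    telemetry.append({"type": item_type,
                      "operation_id": operation_id.get(),
                      "message": message})

def handle_request(name):
    """Open a fresh operation context for the duration of one request."""
    token = operation_id.set(uuid.uuid4().hex)  # new ID per operation
    try:
        track("request", name)
        track("trace", "querying database")
        track("exception", "timeout talking to dependency")
    finally:
        operation_id.reset(token)

handle_request("GET /checkout")
ids = {item["operation_id"] for item in telemetry}
assert len(ids) == 1  # all three items from this operation share one ID
```

Grouping stored items by `operation_id` then reconstructs exactly what happened within one user action, which is the "comprehensive view" the paragraph describes.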

Setting up Application Insights for an ASP.NET application hosted on Azure requires a few prerequisites: a Microsoft Azure subscription, Visual Studio 2013 Update 3 or later, and access to the Azure portal. The steps are: add the Application Insights SDK to the app in Visual Studio, configure it to send telemetry to an Azure Application Insights resource, and verify the application is properly instrumented. After setup, deploy the application to Azure, where telemetry is stored and analyzed in near real time, and use the Azure portal to tailor monitoring parameters and alerts to the application's needs.
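The wiring between app and resource lives in the `ApplicationInsights.config` file that the SDK adds to the project. A minimal fragment looks roughly like the following; the GUID is a placeholder for the instrumentation key copied from your Application Insights resource in the Azure portal:

```xml
<!-- ApplicationInsights.config (added by the SDK). The instrumentation
     key ties this app's telemetry to one Azure resource; the value
     below is a placeholder, not a real key. -->
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
  <InstrumentationKey>00000000-0000-0000-0000-000000000000</InstrumentationKey>
</ApplicationInsights>
```

When Visual Studio connects the project to a resource for you, it fills this key in automatically; checking that it is present is a quick way to confirm the "proper instrumentation" step above.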
