Harnessing Microsoft’s Business Application Platform

The Power Platform (now dubbed the Business Application Platform) started life as a collection of three products introduced into the Office 365 portfolio: [Power]Apps for lightweight business applications, [Power]Automate for business process automation, and [Power]BI for reporting and insights. It now has a fourth constituent, [Power]Virtual Agents: a ‘low code’ solution for developing bots (for use in front-line collaboration solutions like Microsoft Teams).

The platform ships with a framework for managing data and information that it shares with Microsoft’s Dynamics 365 service: the Common Data Service and the Common Data Model. This is where you capture and interact with your data model if you’re not building solutions with high synergy with SharePoint Online.

The Business Application Platform is a hot property right now, and organisations are looking for opportunities to evaluate and pilot its capabilities. I’ve seen a surge in requests for partner support to deliver business solutions powered by the platform.

So why, subject to a case-by-case evaluation, do I find myself concluding that in some scenarios, the Business Application Platform is not the right solution?

OK. Put the pitchforks down, and hear me out. I’m not a blind evangelist. I think the platform is great but that doesn’t mean it’s right for every scenario.

In this article, I’ll be examining what’s required to make the Business Application Platform a viable option for your organisation, and evaluating it against other comparative enabling technologies.

As a Service

The clue is in the name: Business Application Platform. It’s a platform capability. Is it a good idea to develop solutions for a platform that has not been properly embedded within your organisation?

I’ve seen organisations take the following approaches:

  • They ban/block usage of the Business Application Platform due to security concerns, predominantly around access to and usage of business data. (I realise this is less about the platform, and more a concern that existing security vulnerabilities might be exposed or exploited.)
  • They enable the Business Application Platform, but restrict it to usage within a qualified group. This is a temporary measure that mitigates concerns around who gets to deliver solutions on it and, more importantly, who supports those solutions.
  • They launch the Business Application Platform, perhaps with injected Change Management. Solutions start appearing and there is an implicit expectation of IT support; IT get nervous that they’re not fully across what’s happening out there.

Landfall: The legacy of Excel

The concern over who owns and supports what is nothing new. It was happening 20 years ago with Excel. Consider this scenario:

  • Excel is used in the Accounting team for bookkeeping.
  • Alex, from Accounting, takes a course in Visual Basic for Applications.
  • They decide to play with Excel and modify the bookkeeping workbook to automate some things.
  • It’s super effective! The rest of the team think it’s awesome. Alex makes more modifications and starts teaching the rest of the group.
  • Fast forward 6 months: what was a basic workbook is now a fully fledged business (critical) application.
  • Morgan, the head of Accounting, is getting nervous. What if Alex moves on? What if there’s a problem with the solution and the team can’t fix it?
  • Morgan approaches Jules in IT support, with an expectation that they can support the solution and have the skills to do it…

The keyword here is expectation. And it’s established as part of a service governance plan:

Rule Number 1: Set expectations with the consumers of your service so they understand roles, responsibilities and accountability. Do this before you deploy the Business Application Platform.

This brings me to landfall. It’s the term I use to describe the process of transitioning technical ownership of a solution from a citizen developer (or business unit) to a formal IT support function. The Business Application Platform is targeted at everyone, including business users, and trust me, you want to put it into their hands because you’re giving them tools to solve problems and be productive. In short: you need to define and communicate a process that transitions a solution from the business into IT support as part of your governance plan.

Rule Number 2: Define and communicate a process for landfall in your governance plan.

You can design a foundation for the Business Application Platform that meets your requirements for delegation of administration, and anticipates a transfer of ownership. For example: the creation of an additional (logical) environment for IT-owned and managed solutions that sits alongside the default environment created with your tenancy.

Evaluating Solutions for the Business Application Platform

I work with customers to review business requirements and evaluate enabling technology. Often I see solutions masquerading as requirements, driven by funding incentives, a need to innovate and adopt new technology, or the desire to generate a case study. I get it.

There are some gotchas: considerations beyond whether the technology can deliver a solution. There are comparative enabling technologies and key differentiators, even within the Microsoft stack. For example:

Alternatives to PowerApps for presentation and data collection include Microsoft Forms, the SharePoint Framework, Single Page Applications or fully fledged web applications.

Alternatives to Power Automate include Azure Logic Apps (it shares the same foundation as Power Automate) and Azure Functions. You’ve also got commercial off-the-shelf workflow automation platforms such as Nintex and K2. Consider ongoing use of SharePoint 2010 or 2013 workflow in SharePoint Online a burning platform.

Power Virtual Agents are an alternative to going bespoke with the Microsoft Bot Framework.

Rule Number 3: Evaluate requirements against comparative technologies with an understanding of key differentiators, dependencies and constraints.

So what are some of the key considerations?

Cost

The licensing model for the Business Application Platform is multi-tiered, so your license determines what’s in your toolbox. It might restrict use of specific connectors to line of business applications, the ability to make a simple HTTP request to a web service, or how a dashboard might be published and shared. Don’t commit to a Business Application Platform solution only to be stung with P2 licensing costs down the line.

Size and Complexity

Business Application Platform solutions are supposed to be lightweight. Like doing the COVID-19 check-in via Teams. Just look at the way you port solutions between environments. Look at the way they are built. Look at the way the platform is accessible from Office 365 applications and services. Large and complex solutions built for the Business Application Platform are arguably as hard to support and maintain as their ‘as code’ counterparts.

Synergy with Development Operations Cadence

Let’s assume your organisation has an established Development Operations (DevOps) capability, and there’s a process in place for building, testing, and delivering business solutions, and for tracking technical change. It may, for example, advocate Continuous Integration and Continuous Delivery.

Along comes the Business Application Platform, and with it a different method to build, deploy and port solutions built on the platform. It’s immediately at odds with your cadence. Good luck with the automation.

Technologies such as Logic Apps may be more suitable, given that solutions are built and deployed as code.

Synergy with Office 365

It’s not a hard constraint, but Business Application Platform solutions are a better fit where there is high synergy with Office 365 applications and services. The current experience enables business users to build solutions from within the Office 365 ecosystem, and with a pre-defined context (e.g. a SharePoint document library).

Solutions that require integration with a broader set of systems may warrant the use of alternative enabling technologies, especially if additional plumbing is required to facilitate that connectivity. Do you break your principles around ‘low code’ solutions if there’s now a suite of Azure Functions on which your flow depends to run your business logic?

Ownership

Business users have an appreciation for a solution delivery lifecycle, but they’re not developers. The Business Application Platform is designed to empower them, and it comes with the tools required to design, build, test and publish solutions. Your decision to use the Business Application Platform should be informed by a strategy to have business users own and maintain their solutions. If you get the foundations right in terms of security and governance, they won’t be breaking the rules.

Maturity

Is the Business Application Platform an established service in your enterprise? If you’re leaning on partner support to crank out those solutions for you, are you ready to support and maintain them?

Low Code?

I see the term ‘no code’ or ‘low code’ everywhere. You don’t need developers! Anyone can do it! It’s cheaper.

Here’s a fact: it’s possible to build a monstrosity of a ‘low code’ solution, and it’s possible to take a lot of time to do it. Try building a complex UI in PowerApps. Go on, try.

I prefer the term no-hassle instead. The Business Application Platform is ready to use and all the technical bits are there. All you need is the license, and the skills. Keep it small and simple.

You want the ‘no hassle’ benefit to apply to service owners, administrators and consumers alike. There’s a balance here and decisions impacting one group may be to the detriment of the others.

Rule Number 4: Reach a consensus on the definition of ‘Low Code’ and what needs to be in place to realise the benefits.


In summary, the Business Application Platform is a game changer, but it needs to be delivered as a platform capability. The solution that’s going to net you your case study is the first of many that will follow, but not all solutions are right for the platform. Hopefully this article provides you with some pointers on how to evaluate the platform as a potential enabling technology; it’s a culmination of what I’ve learned.

Bots: An Understanding of Time

Some modern applications must understand time, because the messages they receive contain time-sensitive information. Consider a modern Service Desk solution that may have to retrieve tickets based on a date range (the span between dates) or a duration of time.

In this blog post, I’ll explain how bots can interpret date ranges and durations, so they can respond to natural language queries provided by users, either via keyboard or microphone.

First, let’s consider the building blocks of a bot: a client, a channel, a messaging endpoint, and a cognitive service.

The client runs an application that sends messages to a messaging endpoint in the cloud. The connection between the client and the endpoint is called a channel. The message is basically something typed or spoken by the user.

Now, the bot must handle the message and provide a response. The challenge here is interpreting what the user said or typed. This is where cognitive services come in.

A cognitive service is trained to take a message from the user and resolve it into an intent. The intent determines which function the bot will execute, and the resulting response to the user.

To build time/date intelligence into a bot, the cognitive service must be configured to recognise date/time sensitive information in messages, and the bot itself must be able to convert this information into data it can use to query data sources.

Step 1: The Cognitive Service

In this example, I’ll be using the LUIS cognitive service. Because my bot resides in an Australia-based Azure tenant, I’ll be using the https://au.luis.ai endpoint. I’ve created an app called Service Desk App.

Next, I need to build some Intents and Entities and train LUIS.

  • An Entity is a thing or phrase (or set of things or phrases) that may occur in an utterance. I want LUIS (and subsequently the bot) to identify such entities in messages provided to it.

The good news is that LUIS has a prebuilt entity called datetimeV2 so let’s add that to our Service Desk App. You may also want to add additional entities, for example: a list of applications managed by your service desk (and their synonyms), or perhaps resolver groups.

Next, we’ll need an Intent so that LUIS can have the bot execute the correct function (i.e. provide a response appropriate to the message). Let’s create an Intent called List.Tickets.

  • An Intent, or intention, represents something the user wants to do (in this case, retrieve tickets from the service desk). A bot may be designed to handle more than one Intent. Each Intent is mapped to a function/method the bot executes.

I’ll need to provide some example utterances that LUIS can associate with the List.Tickets intent. These utterances must contain key words or phrases that LUIS can recognise as entities. I’ll use two examples:

  • “Show me tickets lodged for Skype in the last 10 weeks”
  • “List tickets raised for SharePoint after July this year”

Now, assuming I’ve created a list-based entity called Application (so LUIS knows that Skype and SharePoint are Applications), LUIS will recognise these terms as entities in the utterances I’ve provided.

Now I can train LUIS and test some additional utterances. As a general rule, the more utterances you provide, the smarter LUIS gets when resolving a message provided by a user to an intent. Here’s an example:
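
The exact shape of the payload depends on the LUIS API version, but the response relayed back to the bot looks something like this (trimmed down, with illustrative values):

    {
      "query": "were any tickets raised for skype last week",
      "topScoringIntent": { "intent": "List.Tickets", "score": 0.84 },
      "entities": [
        { "entity": "skype", "type": "Application" },
        { "entity": "last week", "type": "builtin.datetimeV2.daterange" }
      ]
    }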

Here, I’ve provided a message that is a variation of the utterances provided to LUIS, but it is enough for LUIS to resolve it to the List.Tickets intent. 0.84 is a measure of certainty – not a percentage – and it’s weighted against all other intents. You can see from the example that LUIS has correctly identified the Application (“skype”) and the measure of time (“last week”).

Finally, I publish the Service Desk App. It’s now ready to receive messages relayed from the bot.

Step 2: The Bot

Now, it’s possible to create a bot from the Azure Portal, which will automate many of the steps for you. During this process, you can use the Language Understanding template to create a bot with a built-in LUISRecognizer, so the code will be generated for you.

  • A Recognizer is a component (class) of the bot that is responsible for determining intent. The LUISRecognizer does this by relaying the message to the LUIS cognitive service.

Let’s take a look at the bot’s handler for the List.Tickets intent. I’m using Node.js here.
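
Here’s a minimal sketch of what that handler might look like, assuming the botbuilder v3 SDK with an IntentDialog (intents) wired to a LUISRecognizer (getData is covered next):

    // app.js (fragment) – assumes: var builder = require('botbuilder');
    intents.matches('List.Tickets', function (session, results) {
        // Extract the entities LUIS identified in the payload
        var app = builder.EntityRecognizer.findEntity(results.entities, 'Application');
        var time = builder.EntityRecognizer.findEntity(results.entities, 'builtin.datetimeV2.daterange');

        // Pass the values to getData, which queries the ticket service
        var tickets = getData(app ? app.entity : null, time ? time.entity : null);
        session.send('I found %d matching ticket(s).', tickets.length);
    });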

The function that handles the List.Tickets intent uses the EntityRecognizer class and findEntity method to extract entities identified by LUIS and returned in the payload (results).

It passes these values to a function called getData. In this example, I’m going to have my bot call a (fictional) remote service at http://xxxxx.azurewebsites.net/Tickets. This service will support the Open Data (OData) Protocol, allowing me to query data using the query string. Here’s the code:

(note I am using the sync-request package to call the REST service synchronously).
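
Something like this (a sketch; buildFilter is the helper we’ll build in Step 3 to turn the entity values into an OData $filter):

    var request = require('sync-request');

    // Query the (fictional) ticket service synchronously and return the
    // matching records.
    function getData(application, timeframe) {
        var url = 'http://xxxxx.azurewebsites.net/Tickets?$filter=' +
                  encodeURIComponent(buildFilter(application, timeframe));
        var response = request('GET', url);
        return JSON.parse(response.getBody('utf8'));
    }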

Step 3: Chrono

So let’s assume we’ve sent the following message to the bot:

  • “List tickets raised for SharePoint after July this year”

It’s possible to query an OData data source for date-based information using syntax as follows:

  • $filter=CreatedDate gt datetime'2018-03-08T12:00:00' and CreatedDate lt datetime'2018-07-08T12:00:00'

So we need to be able to convert ‘after July this year’ to something we can use in an OData query string.

Enter chrono-node and dateformat – neat packages that can extract date information from natural language statements and convert the resulting date into ISO UTC format respectively. Let’s put them both to use in this example:
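
As a sketch (the resolved dates in the comments are illustrative – chrono resolves relative expressions against the current date):

    var chrono = require('chrono-node');
    var dateformat = require('dateformat');

    // Extract date information from a natural language statement...
    var parsed = chrono.parseDate('after July this year'); // -> a JavaScript Date (or null)

    // ...and convert the resulting date into ISO UTC format
    var isoDate = dateformat(parsed, 'isoUtcDateTime');    // -> e.g. '2018-07-01T12:00:00Z'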

It’s important to note that chrono-node will ignore some information provided by LUIS (in this case the word ‘after’, but also ‘last’ and ‘before’), so we need a function to process additional information to create the appropriate filter for the OData query:
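
Here’s one way to write it, building on getData from Step 2 (the CreatedDate field and the operator checks are illustrative):

    // Uses the chrono and dateformat requires from above. chrono drops
    // qualifiers like 'after', 'before' and 'last', so we inspect the
    // original text to pick the right operator(s).
    function buildFilter(application, timeframe) {
        var filter = "Application eq '" + application + "'";
        var results = chrono.parse(timeframe || '');
        if (results.length === 0) return filter; // no date information found

        var start = dateformat(results[0].start.date(), 'isoUtcDateTime');
        if (/after/i.test(timeframe)) {
            filter += " and CreatedDate gt datetime'" + start + "'";
        } else if (/before/i.test(timeframe)) {
            filter += " and CreatedDate lt datetime'" + start + "'";
        } else if (results[0].end) {
            // chrono resolved a range (e.g. 'in the last 10 weeks')
            var end = dateformat(results[0].end.date(), 'isoUtcDateTime');
            filter += " and CreatedDate gt datetime'" + start +
                      "' and CreatedDate lt datetime'" + end + "'";
        }
        return filter;
    }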


Handling time-sensitive information is crucial when building modern applications designed to handle natural language queries. After all, wouldn’t it be great to ask for information using your voice, Cortana, and your mobile device when on the move! For now, these modern apps will be dependent on data in older systems with APIs that require dates or date ranges in a particular format.

The beauty of languages like Node.js and the npm package manager is that building these applications becomes an exercise in assembling building blocks as opposed to writing functionality from scratch.

Getting Started with Adaptive Cards and the Bot Framework

This article will provide an introduction to working with AdaptiveCards and the Bot Framework. AdaptiveCards provide bot developers with an option to create their own card templates to suit a variety of scenarios. I’ll also show you a couple of tricks with Node.js that will help you design smart.

Before I run through the example, I want to point you to some great resources from adaptivecards.io which will help you build and test your own AdaptiveCards:

  • The schema explorer provides a breakdown of the constructs you can use to build your AdaptiveCards. Note that there are limitations to the schema, so don’t expect to do all the things you can do with regular mark-up.
  • The schema visualizer is a great tool to enable you (and your stakeholders) to give the cards a test drive.

There are many great examples online (start with GitHub), so you can go wild with your own designs.

In this example, we’re going to use an AdaptiveCard to display an ‘About’ card for our bot. Schemas for AdaptiveCards are JSON payloads. Here’s the schema for the card.
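
A cut-down version might look like this (the %placeholder% names are mine and purely illustrative):

    {
      "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
      "type": "AdaptiveCard",
      "version": "1.0",
      "body": [
        { "type": "TextBlock", "text": "%botName%", "size": "large", "weight": "bolder" },
        { "type": "TextBlock", "text": "Version %version%", "isSubtle": true },
        { "type": "TextBlock", "text": "%description%", "wrap": true }
      ],
      "actions": [
        { "type": "Action.Submit", "title": "Submit Feedback", "data": { "action": "feedback" } }
      ]
    }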

Plug the schema into the visualizer to see the card it generates (go play).

We’ve got lots of %placeholders% for information the bot will insert at runtime. This information could be sourced, for example, from a configuration file collocated with the bot, or from a service the bot has to invoke.

Next, we need to define the components that will play a role in populating our About card. My examples here will use node.js. In our Visual Studio Code workspace, we need three things: the card schema (about.json), a class that represents the card (about.js), and the bot itself (app.js).

The about.json file contains the schema for the AdaptiveCard (the JSON shown above). I like to create a folder called ‘cards’ in my workspace and store the schemas for each AdaptiveCard there.

The Source Data

I’m going to use dotenv to store the values we need to plug into our AdaptiveCard. It’s basically a config file (.env) that sits with your bot. Here we declare the values we want inserted into the card at runtime:
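
Something like this (the variable names are illustrative, and need to line up with what about.js reads shortly):

    # .env – values inserted into the About card at runtime
    BOT_NAME=Service Desk Bot
    BOT_VERSION=1.0.0
    BOT_DESCRIPTION=I can help you lodge and track service desk tickets.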

This is fine for the example here but in reality you’ll probably be hitting remote services for records and parsing returned JSON payloads, rendering carousels of cards.

The Class

about.js is the object representation of the card. It provides attributes for each item of source data and a method to generate a card schema for our bot. Here we go:
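
A sketch of about.js (I’m taking the simple route of a string find/replace over the raw JSON):

    // about.js – the object representation of the About card
    var fs = require('fs');

    class About {
        constructor(name, version, description) {
            // Offload the incoming arguments to class properties
            this.name = name;
            this.version = version;
            this.description = description;
        }

        toCard() {
            // Read the card schema and swap each %placeholder% for its value
            var schema = fs.readFileSync('./cards/about.json', 'utf8')
                .replace(/%botName%/g, this.name)
                .replace(/%version%/g, this.version)
                .replace(/%description%/g, this.description);

            // The contentType tells a handling function this is an AdaptiveCard
            return {
                contentType: 'application/vnd.microsoft.card.adaptive',
                content: JSON.parse(schema)
            };
        }
    }

    module.exports = About;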

The constructor simply offloads incoming arguments to class properties. The toCard() method reads the about.json schema and does a find/replace job on the %placeholder% tokens. A card is created and the updated schema is assigned to the card’s content property. The contentType attribute in the JSON payload tells a handling function that the schema represents an AdaptiveCard.

The Bot

In our bot we have a series of event handlers that trigger based on input from the user via the communication app, or from a cognitive service, which distils input from the user into an intent.

For this example, let’s assume that we have an intent called Show.Help. Utterances from the user such as ‘tell me about yourself’ or quite simply ‘help’ might resolve to this intent.

So we need to add a handler (function) in app.js that responds to the Show.Help intent (this is called a triggerAction). The handler deals with the dialog (interaction) between the user and the bot so we need it to both generate the About card and handle any interactions the card supports (such as clicking the Submit Feedback button on the card).

Note that the dialog between user and bot ends when the endDialog function is called, or when the conditions of the cancelAction are met.

Here’s the code for the handler:
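
A sketch using the botbuilder v3 SDK (assuming bot is your UniversalBot, and the .env values from earlier):

    // app.js (fragment) – handler for the Show.Help intent
    var builder = require('botbuilder');
    var About = require('./about');
    require('dotenv').config();

    bot.dialog('Help', function (session) {
        if (session.message && session.message.value) {
            // A message was received from the card (e.g. Submit Feedback was clicked)
            session.endDialog('Thanks for the feedback!');
        } else {
            // It's a new dialog: build the About card and send it to the channel
            var about = new About(process.env.BOT_NAME,
                                  process.env.BOT_VERSION,
                                  process.env.BOT_DESCRIPTION);
            var msg = new builder.Message(session).addAttachment(about.toCard());
            session.send(msg);
        }
    }).triggerAction({ matches: 'Show.Help' });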

The function starts with a check to see if a dialog is in session (i.e. a message was received). If not (the else condition), it’s a new dialog.

We instantiate an instance of the About class and use the toCard() method to generate a card to add to the message the bot sends back to the channel. The user ends up with the rendered About card.


And there you have it. There are many AdaptiveCard examples online but I couldn’t find any for Node.js that covered the manipulation of cards at runtime. Now, go forth and build fantastic user experiences for your customers!

5 Tips: Designing Better Bots

Right about now, many of you will be in discussions internally or with your partners on chatbots and their applications.

The design process for any bot distils a business process and associated outcome into a dialog. This is a series of interactions between the user and the bot where information is exchanged. The bot must deliver that outcome expediently, seeking clarifications where necessary.

I’ve been involved in many workshops with customers to elicit and evaluate business processes that could be improved through the use of bots. I like to advocate a low risk, cost effective and expedient proof of concept, prior to a commitment to full scale development of a solution. Show, rather than tell, if you will.

With that in mind, I present to you my list of five house rules or principles to consider when deciding if a bot can help improve a business process:

1. A bot can’t access information that is not available to your organisation

Many bots start out life as a proof of concept, or an experiment. Time and resources will be limited at first. You want to prove the concept expediently and with agility. You’ll want to avoid blowing the scope in order to stand up new data sources or staging areas for data.

As you elaborate on the requirements, ask yourself where the data is coming from and how it is currently aggregated or modified in order to satisfy the use case. Your initial prototype may well be constrained by the data sources currently in place within your organisation (and accessibility to those data sources).

Ask the questions: “Where is this information at rest?”, “How do you access it?”, “Is it manually modified?”.

2. Don’t ask for information the user doesn’t know or has to go and look up

Think carefully – does the bot really need to seek clarification? Consider a bot that opens by asking the user to provide their service ticket reference number.

In practice, you’re forcing the user to sign in to some system or dig around their inbox and copy/paste a unique identifier. I’ve yet to meet anyone who has the capacity to memorise things like their service ticket reference numbers. You can design smarter. For example:

  1. Assume the user is looking for the last record they created (you can use your existing data to determine if this is likely)
  2. Show them their records. Get them to pick one (see the sketch after this list).
  3. Use the dialog flow to retain the context of a specific record
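
Here’s a sketch of points 2 and 3 using the botbuilder v3 SDK (getRecentTickets is a hypothetical helper over your existing data source):

    // Show the user their recent tickets and have them pick one
    var builder = require('botbuilder');

    bot.dialog('PickTicket', [
        function (session) {
            var tickets = getRecentTickets(session.message.user.id);
            session.dialogData.tickets = tickets;
            builder.Prompts.choice(session, 'Which ticket did you mean?',
                tickets.map(function (t) { return t.title; }),
                { listStyle: builder.ListStyle.button });
        },
        function (session, results) {
            // The chosen record becomes the context for the rest of the dialog
            var ticket = session.dialogData.tickets[results.response.index];
            session.endDialog('Got it, we\'ll work with "%s".', ticket.title);
        }
    ]);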

By all means, have the bot accommodate scenarios where the user does provide a reference number. Remember, your goal is to reduce time to the business outcome and eliminate menial activity. (Trust me. Looking up stuff in one system to use in another system is menial activity.)

3. Let’s be clear – generic internet keyword searches are an exception

When Siri says ‘here’s what I found on the Internet’, it’s catching an exception; a fall-back option because it’s not been able to field your query. It’s far better than ‘sorry, I can’t help you’. A generic internet/intranet keyword search should never be your primary use case. Search and discovery activity is key to a bot’s value proposition, but these functions should be underpinned by a service fabric that targets (and aggregates) specific organisational data sources. You need to search the internet? Please, go use Bing or Google.

4. Small result sets please

As soon as a chat-bot has to render more than 5 records in one response, I consider this an auto-fail.

Challenge any assertion that a user would want to see a list of more than 5 results, and de-couple the need to summarise data from the need to access individual records. Your bot needs to respond quickly, so avoid expensive queries for data and large resulting data sets that need to be cached somewhere. For example:

Instead, have the bot provide a summary with enough information to give the user an option to take further action (do you want to escalate this?). The summary also informs the most appropriate criteria for the next user-driven search/discovery action (reports that are waiting on me).

Queries may return tens, hundreds or thousands of records, but the user inevitably narrows this down to one, so the question is: “how do you get from X results down to 1?”

Work with the design principle that the bot should apply a set of criteria that returns the ‘X most likely’. Use default criteria based on the most common filtering scenario, but allow the user to re-define that criteria.

5. Don’t remove the people element

Remember a bot should eliminate the menial work a person does, not substitute the person. If you’re thinking of designing a bot to substitute or impersonate a person, think again.

No one wants to do menial work, and I’d hedge my bets that there is no one in your organisation whose workload is 100% menial. Those that do menial work would much rather re-focus their efforts on more productive and rewarding endeavours.

Adding a few Easter Eggs to your bot (i.e. human-like responses) is a nice-to-have. Discovery and resulting word of mouth can assist with adoption.

Consider whether your process involves the need to connect with a person (either as a primary use case, or an edge case). If this is the case, ensure you can expedite the process. Don’t simply serve up a contact number. Instead, consider how to connect the user directly (via tools like Skype, if they are ‘online’). Alternatively, allow the user to request a call-back.


Remember, there are requirements bots and agents cannot satisfy. Define some core principles or house rules with your prospective product owners and test their ideas against them. It can lead you to high value business solutions.