Harnessing Microsoft’s Business Application Platform

The Power Platform (now dubbed the Business Application Platform) started life as a collection of three products introduced into the Office 365 portfolio: [Power]Apps for lightweight business applications, [Power]Automate for business process automation, and [Power]BI for reporting and insights. It now has a fourth constituent, [Power] Virtual Agents: a ‘low code’ solution for developing bots (for use in front-line collaboration solutions like Microsoft Teams).

The platform ships with a framework for managing data and information that it shares with Microsoft’s Dynamics 365 service: the Common Data Service and the Common Data Model. This is where you capture and interact with your data model if you’re not building solutions tightly coupled to SharePoint Online.

The Business Application Platform is a hot property right now, and organisations are looking for opportunities to evaluate and pilot its capabilities. I’ve seen a surge in requests for partner support to deliver business solutions powered by the platform.

So why, subject to a case-by-case evaluation, do I find myself concluding that in some scenarios the Business Application Platform is not the right solution?

OK. Put the pitchforks down, and hear me out. I’m not a blind evangelist. I think the platform is great but that doesn’t mean it’s right for every scenario.

In this article, I’ll be examining what’s required to make the Business Application Platform a viable option for your organisation, and evaluating it against other comparative enabling technologies.

As a Service

The clue is in the name: Business Application Platform. It’s a platform capability. Is it a good idea to develop solutions for a platform that has not been properly embedded within your organisation?

I’ve seen organizations take the following approaches:

  • They ban/block usage of the Business Application Platform due to security concerns, predominantly around access to and usage of business data. (I realise this is less about the platform, and more a concern that existing security vulnerabilities might be exposed or exploited).
  • They enable the Business Application Platform, but restrict it to usage within a qualified group. This is a temporary measure, mitigating concerns around who gets to deliver solutions on it and, more importantly, who supports those solutions.
  • They launch the Business Application Platform, perhaps with some Change Management injected. Solutions start appearing and there is an implicit expectation around IT support; IT get nervous that they’re not fully across what’s happening out there.

Landfall: The legacy of Excel

The concern over who owns and supports what is nothing new. It was happening 20 years ago with Excel. Consider this scenario:

  • Excel is used in the Accounting team for bookkeeping.
  • Alex, from Accounting, takes a course in Visual Basic for Applications.
  • They decide to play with Excel and modify the bookkeeping workbook to automate some things.
  • It’s super effective! The rest of the team think it’s awesome. Alex makes more modifications and starts teaching the rest of the group.
  • Fast forward 6 months: what was a basic workbook is now a fully fledged business-critical application.
  • Morgan, the head of Accounting, is getting nervous. What if Alex moves on? What if there’s a problem with the solution and the team can’t fix it?
  • Morgan approaches Jules in IT support, with an expectation that they can support the solution and have the skills to do it…

The keyword here is expectation. And it’s established as part of a service governance plan:

Rule Number 1: Set expectations with the consumers of your service so they understand roles, responsibilities and accountability. Do this before you deploy the Business Application Platform.

This brings me to landfall. It’s the term I use to describe the process for transitioning technical ownership of a solution from a citizen developer (or business unit) to a formal IT support function. The Business Application Platform is targeted at everyone, including business users, and trust me, you want to put it into their hands because you’re giving them tools to solve problems and be productive. In short: you need to define and communicate a process that transitions a solution from the business into IT support as part of your governance plan.

Rule Number 2: Define and communicate a process for landfall in your governance plan

You can design a foundation for the Business Application Platform that meets your requirements for delegation of administration, and in anticipation of a transfer of ownership. For example: the creation of an additional (logical) environment for IT-owned and managed solutions that sits alongside the default sandbox environment created with your tenancy.

Evaluating Solutions for the Business Application Platform

I work with customers to review business requirements and evaluate enabling technology. Often I see solutions masquerading as requirements, driven by funding incentives, a need to innovate and adopt new technology, and the desire to generate a case study. I get it.

There are some gotchas: considerations beyond whether the technology can deliver a solution. There are comparative enabling technologies and key differentiators, even within the Microsoft stack. For example:

Alternatives to PowerApps for presentation and data collection include Microsoft Forms, the SharePoint Framework, Single Page Applications or fully fledged web applications.

Alternatives for Power Automate include Azure Logic Apps (it shares the same foundation as Power Automate) and Azure Functions. You’ve also got commercial off-the-shelf workflow automation platforms such as Nintex and K2. Consider ongoing use of SharePoint 2010 or 2013 workflow in SharePoint Online a burning platform.

Power Virtual Agents are an alternative to going bespoke with the Microsoft Bot Framework.

Rule Number 3: Evaluate requirements against comparative technologies with an understanding of key differentiators, dependencies and constraints.

So what are some of the key considerations?

Cost

The licensing model for the Business Application Platform is multi-tiered, so your license determines what’s in your toolbox. It might restrict use of specific connectors to line of business applications, the ability to make a simple HTTP request to a web service, or how a dashboard might be published and shared. Don’t commit to a Business Application Platform solution only to be stung with P2 licensing costs down the line.

Size and Complexity

Business Application Platform solutions are supposed to be lightweight. Like doing the COVID-19 check-in via Teams. Just look at the way you port solutions between environments. Look at the way they are built. Look at the way the platform is accessible from Office 365 applications and services. Large and complex solutions built for the Business Application Platform are arguably as hard to support and maintain as their ‘as code’ counterparts.

Synergy with Development Operations Cadence

Let’s assume your organisation has an established Development Operations (DevOps) capability, and there’s a process in place for building, testing, and delivering business solutions, and tracking technical change. It may, for example, advocate Continuous Integration and Delivery.

Along comes the Business Application Platform, and a different method to build, deploy and port solutions built on the platform. It’s immediately at odds with your cadence. Good luck with the automation.

Technologies such as Logic Apps may be more suitable, given that solutions are built and deployed as code.

Synergy with Office 365

It’s not a hard constraint, but Business Application Platform solutions are a better fit if there is high synergy with Office 365 applications and services. The current experience enables business users to build solutions from within the Office 365 ecosystem, and with a pre-defined context (e.g. a SharePoint Document Library).

Solutions that require integration with a broader set of systems may warrant the use of alternative enabling technologies, especially if additional plumbing is required to facilitate that connectivity. Do you break your principles around ‘low code’ solutions if there’s now a suite of Azure Functions, on which your Flow is dependent, to run your business logic?

Ownership

Business users have an appreciation for a solution delivery lifecycle, but they’re not developers. The Business Application Platform is designed to empower them and comes with the tools required to design, build, test and publish solutions. Your decision to use the Business Application Platform should be informed by a strategy to have business users own and maintain their solutions. If you get the foundations right in terms of security and governance, they’re not breaking the rules.

Maturity

Is the Business Application Platform an established service in your enterprise? If you’re leaning on partner support to crank out those solutions for you, are you ready to support and maintain them?

Low Code?

I see the term ‘no code’ or ‘low code’ everywhere. You don’t need developers! Anyone can do it! It’s cheaper.

Here’s a fact: it’s possible to build a monstrosity of a ‘low code’ solution, and it’s possible to take a lot of time to do it. Try building a complex UI in PowerApps. Go on, try.

I prefer the term no-hassle. The Business Application Platform is ready to use and all the technical bits are there. All you need is the license, and the skills. Keep it small and simple.

You want the ‘no hassle’ benefit to apply to service owners, administrators and consumers alike. There’s a balance here and decisions impacting one group may be to the detriment of the others.

Rule Number 4: Reach a consensus on the definition for ‘Low Code’ and what needs to be in place to realise the benefits


In summary, the Business Application Platform is a game changer, but it needs to be delivered as a platform capability. The solution that’s going to net you your case study is the first of many that will follow, but not all solutions are right for the platform. Hopefully this article provides you with some pointers around how to evaluate the platform as a potential enabling technology; it’s a culmination of what I’ve learned.

Placing a Yammer Network into Forced Retirement

People are close to their social networks, and most see the network as representative of their brand and culture. If you kill the network, you’re perceivably killing a culture.

However, sometimes this needs to happen. I recall a recent scenario: effectively a merger and acquisition, where my mandate was to roll three disparate Yammer networks into one. Requests to collaborate ‘here, not there’ and incentives weren’t cutting it. People were holding on to what they felt was their heritage.

Two of these networks had to die to ensure the third thrived.

It’s a process I lovingly call strangulation.

It’s an appropriate metaphor. Social networks thrive on interactivity. To kill the network, you have to force a reduction in that interactivity; figuratively starve it of oxygen.

How do you do that? The key is in providing a strong enough incentive (and visual cues) to move the herd.

Can you just ‘Disable’ Yammer?

There is a documented process to turn Yammer off.

In our case, we wanted to ensure ongoing access to content in the network for a period of time leading up to tenant decommissioning. We also wanted to ensure users could log into the network and ‘springboard’ into other networks in which they were ‘guests’. Outright killing the network was not an option.

Note the following:

  • You cannot manually add or remove members from the All Company group (but you can restrict new conversations to admins only).
  • You cannot prevent people accessing Yammer or creating groups whilst they remain licensed.
  • Verified network administrators cannot access private messages and groups unless Private Content Mode is enabled (consult with your legal team before you enable this mode).
  • You can’t use a tool to migrate group conversations to another Yammer network. Give users time to migrate supporting assets such as Files and Pinned Items (links) themselves.

The Importance of Change Management

This process enables ongoing visibility of content, until such time as licenses are revoked. Retirement directly impacts the network in the following ways:

  • (non verified) Network and Group administrators are demoted, becoming regular community members in the network
  • Yammer groups are removed from the search scope. Only existing members retain access.

Migration Note: If users are transitioning to another network/tenant they are likely assuming a new identity. Any connections to heritage content (things they’ve posted, liked, people they followed) are severed in the transition.

This article is not about organisational change management. It’s about what you can technically do with the Yammer network to facilitate its retirement. However, I will state that you’ll only succeed with this process if you provide clear messaging leading up to, during and after the retirement activity.

Pre-Retirement – It’s Time to Switch!
  • Explain that you’re retiring the network and the reasons why
  • Explain key terms such as ‘Deletion’ and ‘Archival’
  • If you can, publish an inventory of groups and nominated Owners (resolve scenarios where a group has no clear ownership, or there are many listed group admins).
  • Set out timelines for the retirement
  • Clearly set out what you expect Owners to do
  • Clearly set out what’s going to happen if Owners do nothing
  • Provide options (and support collateral) to help users move groups and content to other networks (if applicable)
  • Invite people to be proactive and relocate/re-establish Yammer groups (or outright delete them if they are no longer relevant).
  • Provide a support channel for the transformation.

Preparation

The Yammer Custodian

I recommend creating a Yammer Custodian generic user account to run the retirement. The account stays with the network after retirement and can be used by an admin to access content or manage settings for the network long after licenses for the user community have been revoked. The custodian has the following role:

  • It assumes a role as a verified network admin moving forward.
  • It assumes default ownership of any groups archived during the retirement
  • It is used to make announcements during retirement at network or group level (members have the option to ‘follow’ the custodian to keep informed of what’s happening).

Once you’ve created the Yammer Custodian account, head on over to your network’s Admin settings and make it a Verified Admin.

Accessing Private Content

By default, private groups remain inaccessible to Verified Admins. They have to request access like everyone else.

You have the option to set the network’s content mode to Private (Network Admin > Content Mode). This will enable the Yammer Custodian to access (and archive) groups marked as private in the Yammer network.

Consult with your legal team prior to enabling this mode. Alternatively, you can have the Yammer Custodian request permission and/or ignore private groups during the retirement.

Data Retention

By default, deleted content disappears from user view, but it will be retained in the database for 30 days.

You have the option to set the Data Retention Policy (Network Admin > Data Retention). Changing the setting to Archive will ensure anything deleted during retirement is retained for reporting & analysis.

Note: It is possible for content to be hard deleted via the GDPR workflow or via an API call, even if the data retention policy is set to Archive.

Archiving a Group

It’s a good strategy to start with the groups with the lowest member count first (low risk first).

You should perform the following tasks (signed in as a verified network administrator) in order to archive a Yammer group:

  • Add the Yammer Custodian to the group and promote it to admin

Perform the following tasks as the Yammer Custodian:

  • Revoke the admin role from every other group administrator (the Yammer Custodian should be the only group admin).
  • Append the term ‘(Archived)’ to the end of the group Name.
  • If the group is public, switch it to private. It will no longer be accessible to non-members, which prevents anyone new from joining and/or posting. Existing members retain access.

(Removing groups from the directory and search results will prevent them from popping up in the Discover Groups Yammer feature).

  • Finally, post a message in the group to indicate it is archived:

Note: I recommend posting as an Update instead of an Announcement. An announcement generates email to all group members. Given how quickly you can archive groups, you’d spam them with messages from the Yammer Custodian. Impacted users with high resistance will consider this the equivalent of a ‘knife through the heart’. Announcements should be by exception.
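
If you’d rather script that final post than click through each group as the Custodian, a minimal Node.js sketch against the classic Yammer REST API might look like the following. It assumes you’ve registered a Yammer app and obtained a bearer token for the Yammer Custodian account, and that you have the numeric group ID to hand (visible in the group’s URL); treat it as a sketch, not a supported migration tool.

// Post an 'archived' notice to a Yammer group as the Yammer Custodian.
const https = require('https');
const querystring = require('querystring');

function postArchiveNotice(token, groupId) {
  const body = querystring.stringify({
    body: 'This group has been archived as part of the network retirement.',
    group_id: groupId
  });

  const req = https.request({
    hostname: 'www.yammer.com',
    path: '/api/v1/messages.json',
    method: 'POST',
    headers: {
      'Authorization': 'Bearer ' + token,
      'Content-Type': 'application/x-www-form-urlencoded',
      'Content-Length': Buffer.byteLength(body)
    }
  }, (res) => console.log('Posted to group ' + groupId + ': HTTP ' + res.statusCode));

  req.on('error', (err) => console.error(err));
  req.write(body);
  req.end();
}

// Example: postArchiveNotice(process.env.YAMMER_TOKEN, 12345678);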

Network Configuration

With all the Groups (with the exception of All Company) archived, it’s time to make some changes at the network level:

  • Switch to the All Company group and open Settings. Append ‘(Archived)’ to the end of the group Name and set the Posting Permissions to Restricted.
  • In Network Admin > Configuration > Basics, set the Message Prompt to some kind of deterrent. Be creative!
  • In Network Admin > Configuration > File Upload Permissions, set the File Upload Permissions to block all files.

There you have it. A process to retire (not outright kill) your Yammer network. The process documented here is mature, having been employed in support of a large scale tenant migration. It proved highly effective, but not without supporting organisational change management efforts.

Generating Document Metadata using the Power Platform (part 2)

AI Builder

(This is part two of a two-part article covering AI Builder and the Power Platform. I recommend you skim through part one for some much needed context).

In part one of this two-part series we created an AI Builder model using the Form Processing model type and trained it to extract and set the values for selected fields in documents of a specific type (in my case: a statement of work).

We now have the blueprint for a Content Type for use in SharePoint Online:


In this article (part two of the series), we’ll be creating a Flow using Power Automate. It will use our AI Builder model to extract and store metadata for statements of work uploaded to a SharePoint Online Document Library.

(I’ll continue to use the statement of work as an example throughout the series. Feel free to substitute it, and its selected fields, for something more applicable to your own organisation).

Prerequisites

Before we create the Flow, we’ll need to perform the following actions (the Site Owner role will give you sufficient rights to complete this work). I don’t want to go into detail here but I’ve linked to supporting articles if you’re new to SharePoint Online.

With the prerequisite setup complete, our Document Library settings should look something like this:

With that done, we’re ready to build our Flow!

Design

Before I build a Flow using Power Automate, I like to sketch a high level design to capture any decision logic and get a better feel of what the workflow has to do. We can worry about how these steps work when we implement. Here’s what I want the Flow to do:

There are three sub-processes in the Flow. The trigger will be the creation of a document in our Document Library (manual or otherwise). If that document isn’t in pdf format, we’ll need to convert it. This is because AI Builder’s Form Processing model does not support documents in native (Microsoft) Office format. Finally, we’ll need to send the document off to AI Builder so it can be analysed.

Creating the Flow

OK, let’s get building our Flow. The solution is technically a ‘no-code’ solution but we’re going to create some expressions to handle things like token substitution. Think: Excel, not Visual Studio Code.

Creating a Flow is easy. Simply head on over to flow.microsoft.com, sign in and hit + Create. Our Flow will kick-in when someone uploads a document to SharePoint Online, so select the Automated flow template:

The first building block of an automated flow is the trigger. Select (or search for) the trigger called When an item is created or modified and hit Create.

Configuring the trigger is easy. Simply pick your Site Address from the list provided, and specify the List Name (set a Custom Value if Power Automate is having a hard time resolving your List Name, as it did frequently for me). Click + New Step when ready.

Handling Variables

Next, we need to create some variables to store values we’ll need to reference along the way. The Initialize Variable action is here to help, so we’ll create one for each variable we need.

  • Initialize a variable called FolderPath (type: string) and set its value using the following expression:
substring(triggerOutputs()?['body/{Path}'],0,add(length(triggerOutputs()?['body/{Path}']),-1))

(this removes the trailing ‘/’ from the relative path to the document, using the List Item’s native Path property. It sucks, but we need to pass this value to a web service later in the Flow and it breaks if you keep that trailing ‘/’ there.)

  • Initialize a variable called PDFUrl (type: string) and leave its Value blank for now (this will store a reference to the converted PDF we create later in the Flow).
  • Initialize a variable called PDFContent (type: object) and leave its Value blank for now (this will store the content to be passed to AI Builder for conversion later in the Flow).

At this point in the process, your Flow should look like this:

(Note: You can rename your triggers and actions to help with readability, as shown in the image above).

Decision Logic

Next, we need another action to perform a check to see if our document is in pdf format. Add another action and search for: Condition. We can use the File Name property in Dynamic content (created by our trigger) and simply check the last three characters in that value (i.e. the file extension).
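
If you’d rather type an expression than pick dynamic content, one way to express that check is to put something like this on the left-hand side of the Condition and compare it to pdf. This is a sketch: I’m assuming the internal {FilenameWithExtension} property, which corresponds to the ‘File name with extension’ dynamic content and may differ for your trigger.

toLower(last(split(triggerOutputs()?['body/{FilenameWithExtension}'], '.')))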

At this point our Flow looks like this and must split into two: the ‘yes’ branch (in cases where the document is in pdf format), and the ‘no’ branch (anything else). We’ll handle the ‘yes’ branch first as it’s the easiest.

The ‘Yes’ Path

We’ll need to create a reference to the content of the document and pass that to AI Builder for analysis. The good news is that if the document is already in pdf format, we can use the Get file content using path action and then assign the value from that to the PDFContent variable we created earlier in the Flow:

  • Add a Get file content using path action to the Yes branch of the Flow. Set the File Path to the Full Path property (under dynamic content) that was created by our trigger.
  • Next, add a Set Variable action. We’re going to assign the File Content property (under dynamic content) created by the previous action to the PDFContent variable we initialized earlier.

Still with me? Your setup for the ‘yes’ branch should look like this:

The ‘No’ Path

It gets a little tough here, but stick with me. If the document is not in pdf format, we must convert it.

There are a number of options here, including third-party actions you can purchase to handle the conversion for you. But there is a way to use SharePoint Online itself to convert the document. For this solution I’ve drawn from prescriptive guidance published by Paul Culmsee (@paulculmsee). Full credit to him for figuring this stuff out!

Conversion works like this:

  • We issue a request to SharePoint Online for the document payload (including valuable meta-data)
  • We parse the resulting JSON so that the meta-data we need is cached by Flow as dynamic content
  • We assemble the URL to the converted pdf taking bits from the JSON payload.
  • We issue a HTTP request for the converted pdf and store the body of the response in our PDFContent variable.

OK, let’s go:

  • Add a Send an HTTP request to SharePoint action to the No branch of the Condition. The Uri we want to invoke is as follows:
_api/web/lists/GetbyTitle('<LIST_NAME>')/RenderListDataAsStream?FilterField1=ID&FilterValue1=<DOCUMENT_ID>

(We need to swap out <LIST_NAME> with the name of your Document Library and <DOCUMENT_ID> with the ID property of the document that was uploaded there. Fortunately the ID property is available as dynamic content).

The parameters to this request must contain instructions to return a Uri to the pdf version of the document, so we include the following in the body of our request:

{ 
    "parameters": {
       "RenderOptions" : 4103,
       "FolderServerRelativeUrl" : "/sites/sandpit/@{variables('FolderPath')}"
    }
}

(here we need to send the FolderPath with trailing ‘/’ removed, so we’re using the FolderPath variable we set at the start of the Flow).

So your action configuration should look similar to this:

Next, we need a copy of the payload schema returned by this service call so we can reference it as dynamic content in our Flow. The easiest way to do this is to Test the Flow and copy the schema. Hit the Test button and select I’ll perform the trigger action and hit Save and Test.

Flow will wait patiently for you to upload a document to your Library to trigger the Flow. Once done, steps in your flow will be ticked off as they are tested. Examine the Send an HTTP request to SharePoint action and copy the body content to your clipboard (we’ll use this to create the next action).

  • Switch back to edit mode and add a Parse JSON action. We’re going to assign the body property (under dynamic content) created by the previous action to the Content property. To set the schema property, select Generate from sample and paste the body content you copied to your clipboard when you ran the test.

Your action configuration should look similar to this:

We need to parse the output of the HTTP request because we’re using key values in the payload to create the Url to the converted PDF in our next step.

  • Add a Set Variable action. We’re going to use the following expression to assemble our Url:
@{body('Parse_JSON')?['ListSchema']?['.mediaBaseUrl']}/transform/pdf?provider=spo&inputFormat=@{first(body('Parse_JSON')?['ListData']?['Row'])?['File_x0020_Type']}&cs=@{body('Parse_JSON')?['ListSchema']?['.callerStack']}&docid=@{first(body('Parse_JSON')?['ListData']?['Row'])?['.spItemUrl']}&@{body('Parse_JSON')?['ListSchema']?['.driveAccessToken']}

Your action should look like this:

  • The .mediaBaseUrl property contains the URL to the platform’s media content delivery service (e.g. australiaeast1-mediap.svc.ms)
  • The first expression parses the file type, to pass the format to convert from (e.g. docx)
  • The second expression parses the uri to the pre-converted document in SharePoint Online.
  • The .callerStack and .driveAccessToken properties are encoded strings (tokens) providing useful session context.

(If you want, Test the Flow at this stage. Copy the output of the Set PDF Url action into your browser. If it’s correct, it will display a PDF version of the document you submitted at the start of the test).

  • Next, we need the Flow to request the converted pdf directly. Add a HTTP action to your Flow. (Astonishingly, this action requires a premium license). Assign the PDFUrl variable to the URI property.

Your action should look like this:

  • The last action in your ‘no’ branch will assign the body of the response generated by the HTTP action (as dynamic content) to the PDFContent variable we initialized at the start of the Flow.

Your action should look like this:

Quick Recap

At this stage, your Flow has two branches based on a test to see if the document format is pdf. Each branch ultimately sets the PDFContent variable we’ll use for the final step. In the yes branch, the content is a snapshot of the document uploaded to SharePoint Online. In the no branch, it’s a snapshot of a pdf conversion of that document. Here’s what it should look like:

Calling AI Builder

The final step of the flow occurs when the two branches converge. We’ll invoke the AI Builder model we created in part one of this article and then update the document’s List Item once we have the meta-data.

  • Search for an action called Process and save information from forms and add it to your Flow where the conditional branches converge. Select your model from the list provided. Use the PDFContent variable (as dynamic content) here.

Your action should look like this:

  • Next, we need to take the output of the analysis and use it to update the column data for the statement of work content type in SharePoint Online. Add an Update Item (SharePoint) action to the Flow. Once you specify the List Name the fields will be loaded in for you. Simply use the dynamic content panel to assign values returned from AI Builder to your columns.

Your action should look like this:

As a stretch goal, you can add some resilience to your Flow by adding some conditional logic when you update your list item. AI Builder passes its confidence scores back to Flow, so you could, for example, update the list item only if the confidence score is within a specific threshold. For example:

The Final Solution

Your final solution should look something like this:

As statement of work documents are uploaded to the Document Library, Flow will pick them up and push the content to your model in AI Builder for analysis. The result is a complete set of metadata for each document without the need for manual intervention.

Final things to note:

  • Processing is asynchronous and will take a few seconds (especially if conversion is needed), so the meta-data will not be available immediately after the document is uploaded. My Flow completed on average in about 15-25 seconds. Your mileage may vary.

Wow. That was quite a lot. Thanks for sticking with me. In this series, we trained and published a model using AI Builder and then created a Flow using Power Automate to invoke it.

The solution is able to analyse documents as they are uploaded to SharePoint Online. Column values are extracted from the document as AI Builder runs the analysis. The Flow updates the associated List Item, assigning values to our Columns.

Generating Document Metadata using the Power Platform (part 1)

AI Builder

I hate filling in forms. Really, I do. Imagine if you had to fill in a form each time you uploaded a document into SharePoint Online? You do? I feel for you.

For some time I’ve been wanting to look at how my organisation can use tools such as AI Builder to help auto-generate metadata for the types of documents we store and manage in SharePoint Online.

At the time of writing, AI Builder is about 12 months old following its preview, and it’s been aligned to the Power Platform. The solution includes a form processing AI model, which can be trained to extract named properties from your documents.

I wanted users to be able to search, sort and filter on key pieces of information contained in our statements of work (contracts). The logical solution was to create columns to store this information, given the content is hosted in SharePoint Online. From there, I could convert these columns into managed properties, for use in search refiners and as referenceable values in other solutions.

This is part one of a two-part series designed to walk you through the steps to develop a no-code solution to analyse documents and automatically extract and set associated metadata. (you can find part two here). Our solution will use the following building blocks:

  • SharePoint Online
  • AI Builder
  • Power Automate

From a licensing perspective, you’ll need a Power Automate P2 (Premium) license to re-create my solution and an E1 licence or better to configure the SharePoint Online components.

This article (part one) covers the AI Builder component. Part 2 covers the SharePoint Online / Power Automate component.

Create your Model

You can access AI Builder via PowerApps or Power Automate. Here you can select a pre-fabricated model and train it. The models you train and publish are stored centrally and visible to all your Power Platform solutions so it doesn’t matter how you access AI Builder.

We’re going to use the Form Processing model and train it to extract information from our statement of work documents.

Some important things to note:

  • At the time of writing, AI builder can only analyse content in pdf, png, jpg and jpeg format (don’t worry, our solution will handle conversion from native Office document formats).
  • Start with 5 examples of the document you want the solution to analyse. Ensure they contain examples of the meta-data you want to extract. This is because the field selection process targets a single document and you don’t get to choose which one if you start with many documents.
  • For a reliable model, upload at least 5 other training documents after field selection. Manual effort is needed to teach AI Builder how to analyse each document. Put these in the same location so you can bulk upload them in one action.
  • It’s important to introduce some variety here (in my case, there were several key variations of our statement of work I wanted AI Builder to recognise). A set of completely different, unrelated documents will skew your model.

Upload

First, convert any training documents to pdf format if they are in a native Office document format.

You’ll be prompted to upload your documents. Remember you can add more training documents later if you like.

Once you’ve uploaded your documents, hit that Analyse button to give AI Builder an initial look.

Review

Now you’ll have the opportunity to identify the metadata you want to extract from similar documents. AI Builder may take the initiative and create some for you. In any case, keep what you want. You can come back after saving the model to repeat this process.

AI Builder will present an example document to you for review. Simply highlight the content you wish the model to extract from the document and create an associated field name. As mentioned earlier, make sure you can find all of your fields in this document.

When you’re done with the document, hit that Confirm Fields button and you’re ready to review the rest of your payload.

This review is both manual and sequential, one document at a time. Your progress is tracked as you go. You’ll need to identify and assign values to each field you created.

(In cases where a field does not occur in one of your documents, or its value is not set, you can select the Field not in document option).

Note that if you add more documents to the model, you’ll need to run through this process for each new document you added. You won’t have to repeat the process for documents already reviewed unless you add, modify or remove a selected field.

At the end of the review process, you’ll be able to click the Done button. On clicking Next you’ll be presented with a model summary.

Training

The final step before testing is to Train your model. This will save it and the training will happen in the background. Keep an eye on the Status. It will read Trained when training is complete. You’re now ready to test your model before publishing it.

Testing

So here I have my model, which I have lovingly called SOW_Processing. I needed 20 documents to fully train my solution. There are 12 selected fields my model will extract during analysis. Your own end state will of course vary.

Listing the field names here is handy, as we’ll need a Content Type to represent a statement of work and Columns to hold this metadata in our SharePoint Online solution. We’ll need a blueprint for the SharePoint Online setup. Let’s go with this:

Here we’re testing the reliability of the model. It’s not fit for purpose if, for example, it can’t identify correct values for the selected fields in a statement of work presented to it.

The Quick test allows us to see how well a model can identify the correct metadata in documents we present to it for analysis. Make sure you use new documents (not used to train the model) for testing.

Upload a document and have the model analyse it. On scanning the document, you’ll find parts of it highlighted, indicating that the model has found a value for one of its selected fields.

The Confidence score is important. This represents the degree of certainty the model has that the value for the selected field is correct.

Some important things to note:

  • Training is an iterative process. You’re supposed to re-train your model over time, introducing new variations to it.
  • Set a minimum confidence threshold for each selected field as part of your acceptance criteria for the model.
  • Scores between 80 and 100 are good. You cannot guarantee a confidence score of 100 even if you train your model extensively, since new variations can always be presented to it.
  • Scores below 80 introduce risk and below 50 are indicative of an unreliable model. An unreliable model is, in my opinion, not fit for purpose (since you can’t trust the data). The solution here is to train your model using your test documents (by uploading them to the model and re-training it). This will teach your model to recognise such variations in future tests.
  • You can use the confidence score in applications (such as Flows or Apps) as part of your validation logic! (more on that in part 2).

Publication

So, let’s assume you’ve run some Quick tests, and your model is identifying values for selected fields with a confidence score that is within the threshold set in your acceptance criteria. Great! You can now publish it by selecting Publish.

Your model is ready for use.


Now that we’ve created an AI Builder model to extract selected fields from our statement of work documents, we can develop a Power Automate solution to extract and set the metadata for these documents when they get uploaded to SharePoint Online!

Head on over to part two where we’ll cover the setup and configuration of the workflow (Power Automate) component of the solution.

The Yammer Roast


Taking my inspiration from Comedy Central, the Yammer Roast is a forum in which we can directly address resistances around Yammer, its role, and past failures in retrospect.

Some of my clients have tried Yammer and concluded that, for various reasons, it’s failed to take hold. For some, the value is clear and it’s a case of putting a compelling approach and supporting rationale to sponsors and consumers who remain sceptical. Others are looking for a way to make it work in their current collaboration landscape.

The Yammer roast is designed to tease out, recognise and address key resistances. It’s not an opportunity to blindly evangelise Yammer; it’s an exercise in consulting to provide some clarification around Yammer as a business solution, and what’s needed for a successful implementation.

In this article, I’ll cover some of the popular resistances aired at Yammer Roasts, why these resistances exist and how you can address them. If you’re an advocate for social networking in your own organisation, my hope is that this can inform your own discussion.

  1. We have concerns over inappropriate usage and distraction from proper work

There’s a perception that Yammer is a form of distraction and employees will be off posting nonsense on Yammer instead of doing proper work. Even worse, they may be conducting themselves inappropriately.

A self-sustaining Yammer network has to find that balance between [non-work stuff] and [work stuff], and it needs an element of both to be successful. Informal, social contributions beget more meaningful work contributions.

Consider what is perceived as informal, non-work-stuff to be valuable. That person who just posted a cat picture? They are adopting your platform. As are those people who liked or commented on it. Consider the value to the organisation if people are connecting with each other and forming new relationships, outside of the confines of an organisational hierarchy.

Assume your employees know how to conduct themselves and can practice good netiquette. They signed a contract of employment which includes clauses pertaining to code of conduct. Perhaps refer to that in your terms of use.

Establish a core principle that no contribution should be discouraged. It really doesn’t matter where content in Yammer is generated, and any one person’s view of the content is informed by who they follow, the groups they subscribe to and the popularity of content. Uninteresting, irrelevant content is quickly hidden over time. “But what if someone puts a cat picture in the All Company feed?” So what? What if the CEO likes it? Consider creating a foundational set of groups to ensure that on day one there’s more than just the All Company feed.

Strike that balance between work-stuff and non-work stuff.  Set an objective for your community manager (yes, a formal responsibility!) to help combat potential stage fright; there are numerous incentives and initiatives that can come into play here.  Accept the fact that your social network will, and should, grow organically.

  2. We’ve got Yammer and no-one is using it.

…but your partners and vendors are and they’re looking to collaborate with you.

Stagnant networks; a common scenario. Your organisation may be looking at alternative platforms as a way to reset/relaunch. Here, you lament the lack of tangible, measurable business outcomes at the outset of the initial rollout or the lack of investment in change management activities to help drive adoption of the platform.

You’ll smile and nod sagely, and perhaps talk to a view similar to the following:

But, for whatever reason, you’re here. So how can past experiences inform future activities?

Whether you use Yammer or not, the success of your social network in its infancy is dependent on measurable business outcomes. Without the right supporting campaign, a way to track adoption and a way to draw insight from usage, you effectively roll the dice with simply ‘turning it on’. Initiatives around Yammer can start small with a goal of communicating the success (of a process) and subsequently widening its application within your business.

Simply swapping out the technology without thinking about the business outcome may renew interest from sponsors who’ve lost faith in the current product, but you risk a rinse and repeat scenario.

“But we’re dependent on executive sponsorship!” I hear you lament. This is a by-product of early boilerplate change campaigns, where success somehow rested on executives jumping in to lead by example. Don’t get me wrong, it’s great when this happens. From my perspective, you need any group within your business with a value use case and the willingness to try. You have O365, the technology is there.

You can consider the Yammer client to not just be a portal into your network, but the networks of your vendors and partners. Access to your partner/vendor product teams (via Yammer External Networks) and being able to leverage subject matter expertise from them and the wider community is a compelling case in the absence of an internal use case.

Combatting any negative perceptions of your social network following a failure to launch is all about your messaging, and putting Yammer’s capability into a wider context, which leads me to…

  3. But we’re using Teams, Slack, Jabber, Workplace by Facebook (delete as appropriate)

Feature parity – it can be a head scratcher. “But we can collaborate in Skype. And Teams! And Yammer! And via text! What will our users do?” Enterprise architects will be advocating the current strategic platform in the absence of a differentiator, or exception. Your managed services team will be flagging additional training needs. There will be additional overheads.

If you’re there to champion Yammer in the face of an incumbent (or competing) solution, you need to adopt the tried and tested approach: 1) identify the differentiator and align the new technology (i.e. Yammer) to it, 2) quantify the investment, and 3) outline the return on investment.

As a consultant my first conversations are always focused around the role Yammer will play in your organisation’s collaboration landscape. The objective is to ensure initial messaging about Yammer will provide the required clarity and context.

This reminds me of an engagement some time ago; an organisation with a frontline workforce off the radar forming working groups in Facebook. “We aren’t across what’s going on. We need to bring them over to Yammer.” Objective noted, but consider the fact that a) these users have established their networks and their personal brand, b) they are collaborating in the knowledge that big brother isn’t watching. Therefore, there’s no way in hell they’ll simply jump ship. The solution? What can you provide that this current solution cannot? Perhaps the commitment to listen, respond and enact change.

The modern digital workplace is about choice and choice is okay. Enable your users to make that informed decision and do what is right for their working groups.

  4. It’s another app. There’s an overhead to informing and educating our business.

Of course there is. This is more around uncertainty as to the strategy for informing and educating your business. Working out the ‘what’s in it for me?’ element.

There is a cost to getting Yammer into the hands of your workforce. For example, from a technical perspective, you need to provide everyone with the mobile app (MAM scenarios included) and help users overcome initial sign-in difficulties (MFA scenarios included). Whatever this may cost in your organisation, your business case needs to justify (i.e. show a return on) that investment.

Campaign activities to drive adoption are dependent on the formal appointment of a Community Manager (in larger organisations), and a clear understanding around moderation. So you do need to create that service description and governance plan.

I like to paint a picture representing the end state – characteristics of a mature, self-sustaining social network. In this scenario, the Yammer icon sits next to Twitter, Instagram, Facebook on the mobile device. You’re a click away from your colleagues and their antics. You get the same dopamine rush on getting an alert. It’s click bait.  God forbid, you’re actually checking Yammer during the ad-break, or just before bed time. Hang on, your employee just pointed someone in the right direction, or answered a question. Wait a second! That’s voluntarily working outside of regular hours! Without pay!

  5. Yammer? Didn’t that die out a few years ago?

You’ve got people who remember Yammer back in the days before it was a Microsoft product. Yammer was out there. You needed a separate login for Yammer. There were collaboration features built into Microsoft’s SharePoint platform but they sucked in comparison, and rather than invest in building competitive, comparative features into their own fledgling collaboration solution, Microsoft acquired Yammer instead.

Roll forward a few months, and there’s the option to swap out social/newsfeed features in SharePoint for those in Yammer, via the best possible integration at the time (which was essentially link replacement).

Today, with Office 365, there’s more integration. Yammer has supported O365 sign-in for a couple of years now. Yammer functions are popping up in other O365 workloads. A good example is the Talk about this in Yammer function in Delve, which frames the resulting conversation from Yammer within the Delve UI. From an end-user experience perspective there is little difference between Yammer now and the product it was pre-Microsoft acquisition, but the product has undergone significant changes (external groups and networks, for example). Expect ongoing efforts to tighten integration with the rest of the O365 suite, and understand and address the implications of cutting off that functionality.

The Outcome

Yammer (or your social networking platform of choice) becomes successful when it demonstrates a high-value role in driving your organisation’s collaborative and social culture. In terms of maturity we’re talking self-sustaining, beyond efforts to drive usage and lead by example.

Your social network is an outlet for everyone in your organisation. People new to your organisation will see it as a reflection of your collaborative and social culture; give them a way to connect with people and immediately contribute in their own way.

It can be challenging to create such an outlet where the traditional hierarchy is flattened, where everyone has a voice (no matter who they are and where they sit within the organisation). Allowing online personalities to develop without reluctance and other constraints (“if it’s public, it’s fair game!”) will be the catalyst to generating the relationships, knowledge, insight (and resulting developments) that will improve your business.

Bots: An Understanding of Time

Some modern applications must understand time, because the messages they receive contain time-sensitive information. Consider a modern Service Desk solution that may have to retrieve tickets based on a date range (the span between dates) or a duration of time.

In this blog post, I’ll explain how bots can interpret date ranges and durations, so they can respond to natural language queries provided by users, either via keyboard or microphone.

First, let’s consider  the building blocks of a bot, as depicted in the following view:

The client runs an application that sends messages to a messaging endpoint in the cloud. The connection between the client and the endpoint is called a channel. The message is basically something typed or spoken by the user.

Now, the bot must handle the message and provide a response. The challenge here is interpreting what the user said or typed. This is where cognitive services come in.

A cognitive service is trained to take a message from the user and resolve it into an intent. The intent determines which function the bot will execute, and the resulting response to the user.

To build time/date intelligence into a bot, the cognitive service must be configured to recognise date/time sensitive information in messages, and the bot itself must be able to convert this information into data it can use to query data sources.

Step 1: The Cognitive Service

In this example, I’ll be using the LUIS cognitive service. Because my bot resides in an Australia-based Azure tenant, I’ll be using the https://au.luis.ai endpoint. I’ve created an app called Service Desk App.

Next, I need to build some Intents and Entities and train LUIS.

  • An Entity is a thing or phrase (or set of things or phrases) that may occur in an utterance. I want LUIS (and subsequently the bot) to identify such entities in messages provided to it.

The good news is that LUIS has a prebuilt entity called datetimeV2 so let’s add that to our Service Desk App. You may also want to add additional entities, for example: a list of applications managed by your service desk (and their synonyms), or perhaps resolver groups.

Next, we’ll need an Intent so that LUIS can have the bot execute the correct function (i.e. provide a response appropriate to the message). Let’s create an Intent called List.Tickets.

  • An Intent, or intention represents something the user wants to do (in this case, retrieve tickets from the service desk). A bot may be designed to handle more than one Intent. Each Intent is mapped to a function/method the bot executes.

I’ll need to provide some example utterances that LUIS can associate with the List.Tickets intent. These utterances must contain key words or phrases that LUIS can recognise as entities. I’ll use two examples:

  • “Show me tickets lodged for Skype in the last 10 weeks”
  • “List tickets raised for SharePoint  after July this year”

Now, assuming I’ve created a list-based entity called Application (so LUIS knows that Skype and SharePoint are Applications), LUIS will recognise these terms as entities in the utterances I’ve provided:

Now I can train LUIS and test some additional utterances. As a general rule, the more utterances you provide, the smarter LUIS gets when resolving a message provided by a user to an intent. Here’s an example:

Here, I’ve provided a message that is a variation of utterances provided to LUIS, but it is enough for LUIS to resolve it to the List.Tickets intent. 0.84 is a measure of certainty – not a percentage, and it’s weighted against all other intents. You can see from the example that LUIS has correctly identified the Application (“skype”), and the measure of time  (“last week”).
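
For reference, the raw JSON that LUIS hands back to the bot for a query like this has roughly the following shape (the values shown are illustrative):

{
    "query": "show me tickets for skype last week",
    "topScoringIntent": { "intent": "List.Tickets", "score": 0.84 },
    "entities": [
        { "entity": "skype", "type": "Application", "startIndex": 20, "endIndex": 24 },
        { "entity": "last week", "type": "builtin.datetimeV2.daterange", "startIndex": 26, "endIndex": 34 }
    ]
}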

Finally, I publish the Service Desk App. It’s now ready to receive messages relayed from the bot.

Step 2: The Bot

Now, it’s possible to create a bot from the Azure Portal, which will automate many of the steps for you. During this process, you can use the Language Understanding template to create a bot with a built in LUISRecognizer, so the code will be generated for you.

  • A Recognizer is a component (class) of the bot that is responsible for determining intent. The LUISRecognizer does this by relaying the message to the LUIS cognitive service.

Let’s take a look at the bot’s handler for the List.Tickets intent. I’m using Node.js here.

The function that handles the List.Tickets intent uses the EntityRecognizer class and findEntity method to extract entities identified by LUIS and returned in the payload (results).
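
A minimal sketch of such a handler looks something like this (it assumes the botbuilder v3 SDK, a UniversalBot instance called bot with the LUISRecognizer registered, and the entity names created earlier; the exact datetimeV2 subtype returned by LUIS will vary):

// app.js - handler for the List.Tickets intent (botbuilder v3).
const builder = require('botbuilder');

bot.dialog('ListTickets', (session, args) => {
    // Pull out the entities LUIS identified in the message.
    const application = builder.EntityRecognizer.findEntity(args.intent.entities, 'Application');
    const timeframe = builder.EntityRecognizer.findEntity(args.intent.entities, 'builtin.datetimeV2.daterange');

    // Hand the raw values to getData, which builds and runs the OData query.
    const tickets = getData(application ? application.entity : null,
                            timeframe ? timeframe.entity : null);

    session.endDialog(`I found ${tickets.length} ticket(s).`);
}).triggerAction({ matches: 'List.Tickets' });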

It passes these values to a function called getData. In this example, I’m going to have my bot call a (fictional) remote service at http://xxxxx.azurewebsites.net/Tickets. This service will support the Open Data (OData) Protocol, allowing me to query data using the query string. Here’s the code:

(note I am using the sync-request package to call the REST service synchronously).
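
A sketch of what such a getData function might look like, using sync-request as noted above (the service URL is the fictional one mentioned earlier, the field names are illustrative, and I’m assuming an OData v4 style JSON payload with the results in a value array):

const request = require('sync-request');

// Builds an OData query against the (fictional) ticket service and returns the matching tickets.
// buildDateFilter, sketched in Step 3, turns e.g. 'after July this year' into a date filter.
function getData(application, timeframeText) {
    const filters = [];
    if (application) filters.push(`Application eq '${application}'`);

    const dateFilter = timeframeText ? buildDateFilter(timeframeText) : '';
    if (dateFilter) filters.push(dateFilter);

    const url = 'http://xxxxx.azurewebsites.net/Tickets?$filter=' +
                encodeURIComponent(filters.join(' and '));

    // sync-request blocks until the service responds, which keeps the example simple.
    const response = request('GET', url);
    return JSON.parse(response.getBody('utf8')).value;
}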

Step 3: Chrono

So let’s assume we’ve sent the following message to the bot:

  • “List tickets raised for SharePoint  after July this year”

It’s possible to query an OData data source for date based information using syntax as follows:

  • $filter=CreatedDate gt datetime'2018-03-08T12:00:00' and CreatedDate lt datetime'2018-07-08T12:00:00'

So we need to be able to convert ‘after July this year’ to something we can use in an OData query string.

Enter chrono-node and dateformat – neat packages that can extract date information from natural language statements and convert the resulting date into ISO UTC format respectively. Let’s put them both to use in this example:

It’s important to note that chrono-node will ignore some information provided by LUIS (in this case the word ‘after’, but also ‘last’ and ‘before’), so we need a function to process additional information to create the appropriate filter for the OData query:
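
A sketch of such a function, reusing the CreatedDate field and operator conventions from the OData example above:

const chrono = require('chrono-node');
const dateformat = require('dateformat');

// Turns a natural-language timeframe (e.g. 'after July this year') into an OData filter fragment.
// chrono-node extracts the date itself; words like 'after' and 'before' decide the operator.
function buildDateFilter(timeframeText) {
    const parsed = chrono.parseDate(timeframeText);
    if (!parsed) return '';

    const isoDate = dateformat(parsed, 'isoUtcDateTime'); // e.g. 2018-07-01T00:00:00Z

    if (/before/i.test(timeframeText)) {
        return `CreatedDate lt datetime'${isoDate}'`;
    }

    // 'after', 'since' and 'last' all resolve to a lower bound in this simple sketch.
    return `CreatedDate gt datetime'${isoDate}'`;
}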


Handling time-sensitive information is crucial when building modern applications designed to handle natural language queries. After all, wouldn’t it be great to ask for information using your voice, Cortana, and your mobile device when on the move! For now, these modern apps will be dependent on data in older systems with APIs that require dates or date ranges in a particular format.

The beauty of languages like Node.js and the npm package manager is that building these applications becomes an exercise in assembling building blocks as opposed to writing functionality from scratch.

Getting Started with Adaptive Cards and the Bot Framework

This article will provide an introduction to working with AdaptiveCards and the Bot Framework. AdaptiveCards provide bot developers with an option to create their own card templates to suit a variety of scenarios. I’ll also show you a couple of tricks with Node.js that will help you design smart.

Before I run through the example, I want to point you to some great resources from adaptivecards.io which will help you build and test your own AdaptiveCards:

  • The schema explorer provides a breakdown of the constructs you can use to build your AdaptiveCards. Note that there are limitations to the schema, so don’t expect to do all the things you can do with regular mark-up.
  • The schema visualizer is a great tool to enable you (and your stakeholders) to give the cards a test drive.

There are many great examples online (start with GitHub), so you can go wild with your own designs.

In this example, we’re going to use an AdaptiveCard to display an ‘About’ card for our bot. Schemas for AdaptiveCards are JSON payloads. Here’s the schema for the card.
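
A cut-down version of that schema, using the %placeholder% convention described below (the placeholder names here are illustrative, and the full card also carries the Submit Feedback action mentioned later), looks like this:

{
    "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
    "type": "AdaptiveCard",
    "version": "1.0",
    "body": [
        { "type": "TextBlock", "text": "%botName%", "size": "large", "weight": "bolder" },
        { "type": "TextBlock", "text": "Version %botVersion%", "isSubtle": true },
        { "type": "TextBlock", "text": "%botDescription%", "wrap": true }
    ],
    "actions": [
        { "type": "Action.Submit", "title": "Submit Feedback", "data": { "action": "feedback" } }
    ]
}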

This generates the following card (go play in the visualizer):

We’ve got lots of %placeholders% for information the bot will insert at runtime. This information could be sourced, for example, from a configuration file collocated with the bot, or from a service the bot has to invoke.

Next, we need to define the components that will play a role in populating our About card. My examples here will use Node.js. The following simple view outlines what we need to create in our Visual Studio Code workspace:
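A layout along these lines works (the folder and file names other than cards/about.json are illustrative):

```
bot-workspace/
├── app.js           (bot entry point and dialog handlers)
├── about.js         (class representation of the About card)
├── .env             (configuration values read by dotenv)
└── cards/
    └── about.json   (AdaptiveCard schema for the About card)
```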

The about.json file contains the schema for the AdaptiveCard (which is the code in the script block above). I like to create a folder called ‘cards’ in my workspace and store the schemas for each AdaptiveCard there.

The Source Data

I’m going to use dotenv to store the values we need to plug into our AdaptiveCard at runtime. It’s basically a config file (.env) that sits with your bot. Here we declare the values we want inserted into the AdaptiveCard at runtime:
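A sketch of the .env file (keys and values are purely illustrative, but they map onto the placeholders in about.json):

```
# Values the bot injects into the About card at runtime
BOT_NAME=Helpdesk Bot
BOT_VERSION=1.0.0
BOT_DESCRIPTION=I can help you raise and track service tickets.
```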

This is fine for the example here but in reality you’ll probably be hitting remote services for records and parsing returned JSON payloads, rendering carousels of cards.

The Class

about.js is the object representation of the card. It provides attributes for each item of source data and a method to generate a card schema for our bot. Here we go:
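A minimal sketch of about.js, assuming the workspace layout and placeholder names shown earlier:

```javascript
const fs = require('fs');
const path = require('path');

class About {
    constructor(botName, version, description) {
        // Offload incoming arguments to class properties
        this.botName = botName;
        this.version = version;
        this.description = description;
    }

    toCard() {
        // Read the AdaptiveCard schema and replace each %property% with its value
        let schema = fs.readFileSync(path.join(__dirname, 'cards', 'about.json'), 'utf8');
        Object.keys(this).forEach((key) => {
            schema = schema.split(`%${key}%`).join(this[key]);
        });

        // Wrap the schema so a handling function knows it's an AdaptiveCard attachment
        return {
            contentType: 'application/vnd.microsoft.card.adaptive',
            content: JSON.parse(schema)
        };
    }
}

module.exports = About;
```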

The constructor simply offloads incoming arguments to class properties. The toCard() method reads the about.json schema and recursively does a find/replace job on the class properties. A card is created and the updated schema is assigned to the card’s content property. The contentType attribute in the JSON payload tells a handling function that the schema represents an AdaptiveCard.

The Bot

In our bot we have a series of event handlers that trigger based on input from the user via the communication app, or from a cognitive service, which distils input from the user into an intent.

For this example, let’s assume that we have an intent called Show.Help. Utterances from the user such as ‘tell me about yourself’ or quite simply ‘help’ might resolve to this intent.

So we need to add a handler (function) in app.js that responds to the Show.Help intent (this is called a triggerAction). The handler deals with the dialog (interaction) between the user and the bot so we need it to both generate the About card and handle any interactions the card supports (such as clicking the Submit Feedback button on the card).

Note that the dialog between user and bot ends when the endDialog function is called, or when the conditions of the cancelAction are met.

Here’s the code for the handler:
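A minimal sketch of the handler, assuming the bot instance from earlier, the About class above, and the .env values shown previously. One common way to implement the “was a message received?” check is to look at session.message.value, which is populated when the user interacts with the card:

```javascript
require('dotenv').config();
const builder = require('botbuilder');
const About = require('./about');

bot.dialog('ShowHelp', (session) => {
    if (session.message && session.message.value) {
        // The user interacted with the card (e.g. clicked Submit Feedback)
        session.endDialog('Thanks for the feedback!');
    } else {
        // New dialog: build the About card and send it back to the channel
        const about = new About(
            process.env.BOT_NAME,
            process.env.BOT_VERSION,
            process.env.BOT_DESCRIPTION
        );
        const msg = new builder.Message(session).addAttachment(about.toCard());
        session.send(msg);
    }
}).triggerAction({ matches: 'Show.Help' });
```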

The function starts with a check to see if a dialog is in session (i.e. a message was received). If not (the else condition), it’s a new dialog.

We instantiate an instance of the About class and use the toCard() method to generate a card to add to the message the bot sends back to the channel. So you end up with this:


And there you have it. There are many AdaptiveCard examples online but I couldn’t find any for Node.js that covered the manipulation of cards at runtime. Now, go forth and build fantastic user experiences for your customers!

Your Modern Collaboration Landscape

There are many ways people collaborate within your organisation. You may or may not enjoy the fruits of that collaboration. Does your current collaboration landscape cater for the wide variety of groups that form (organically or inorganically) to build relationships and develop your business?

Moving to the cloud is a catalyst for re-evaluating your collaboration solutions and their value. Platforms like Office 365 are underpinned by search/discovery tools that can traverse and help draw insight from the output of collaboration, including conversations and connections between people and information. Modern applications open up new opportunities to build working groups that include people from outside your organisation, with whom you can freely and securely share content.

I’ve been in many discussions with customers on how enabling technologies play a role in the modern collaborative landscape. Part of this discussion is about identifying the various group archetypes and how characteristics can align or differ. I’ve developed a view that forms these groups into three ‘tiers’, as follows:

Organisations should consider a solution for each tier, because there are requirements in each tier that are distinct. The challenge for an organisation (as part of a wider Digital Workplace strategy) is to:

  • Understand how existing and prospective solutions will meet collaboration requirements in each tier, and communicate that understanding.
  • Develop a platform where information managed in each tier can be shared with other tiers.

Let’s go into the three tiers in more detail.

Tier One (Intranet)

Most organisations I work with have an established Tier One business solution, like a corporate intranet. These are the first to mature. They are logically represented as a hierarchy of containers (i.e. sites), with a mix of implicit and explicit access control (and associated auditing difficulties). The principal use is to store documents and host authored web content (such as news). Tier One systems are usually dependent on solutions in other tiers to facilitate (and retain) group conversations or discussions.

  • Working groups are hierarchical and long term, based on a need to model the relationships between groups in an organisation (e.g. Payroll sits under Finance, Auditing sits under Payroll).
  • Activity here is closed and formal. Contribution is restricted to smaller groups.
  • Information is one-way and top down. Content is authored and published for group-wide or organisation-wide consumption.
  • To get things done, users will be more dependent on a Service Desk (for example: managing access control, provisioning new containers), at the cost of agility.
  • Groups are established here to work towards a business outcome or goal (deliver a project, achieve our organisation’s objectives for 2019).

Tier Three (Social Network)

Tier Three business solutions represent your organisation’s social network. Maturity here ranges from “We launched [insert platform here] and no-one is using it” to “We’ve seen an explosion in adoption and it’s Giphy city”. They are usually dependent on solutions in other tiers to provide capabilities such as web content/document management (case in point: O365 Groups and Yammer).

  • Tier Three groups here are flattened, and cannot by design model a hierarchy. They tend to be long term, and prone to stagnation.
  • Groups represent communities, capabilities and similar interest groups, all of which are of value to your organisation. At this point you say: “I understand how the ‘Welcome to Yammer’ group is valuable, but what about the ‘Love Island Therapy’ group?”. At this point I say: “Here you have a collection of individuals who are proactively using and adopting your platform”.
  • Unlike in the other tiers, groups here tend to have no business outcome, although they’ll have objectives to gain momentum and visibility.
  • Collaboration here is open (public) and informal, down to the #topics people discuss and the language that is used.
  • A good Tier Three solution will be fully self service, subject to a pre-defined usage policy. There should be no restrictions beyond group level moderation in terms of who can contribute. If it’s public or green it’s fair game!
  • Tier Three groups have the biggest membership, and can support thousands of members.

Tier Two (Workspaces)

Tier Two comes last, because in my experience it’s the capability that is the least developed in organisations I work with and the last to mature.

A Tier Two business solution delivers a collaborative area for teams such as working groups, committees and project teams. They will provide a combination of features inherent in Tier One and Tier Three solutions. For example, the chat/discussion capabilities of a Tier Three solution and the content management capabilities of a Tier One solution.

  • Tier Two groups here are flattened, and cannot by design model a hierarchy. They tend to be short term, in place to support a timeboxed initiative or activity.
  • Groups represent working groups, committees and project teams, with a need to create content and converse. These groups are coalitions, including representation from different organisational groups that need to come together to deliver an outcome.
  • Groups work towards a business outcome, for example: develop a business case, deliver a document.
  • Collaboration here tends to be closed (restricted to a small group) and semi-formal, but such groups can also be closed and formal, or open and informal.
  • A good Tier Two solution will be fully self service, subject to a pre-defined usage policy. There should be no restrictions beyond group level moderation in terms of who can contribute.
  • Groups represent a small number of individuals, and do not grow to the size of departmental (Tier One) groups or social (Tier Three) groups.

The three-tiers view identifies the different ways collaboration happens within your organisation. It is solution agnostic: you can advocate any technology in any tier if it meets the requirement. The view helps evaluate the diverse needs of your organisation, and determine how effective your current solutions are at meeting requirements for collaboration and information working.

Agile Teams to High Performing Machines

Agile teams are often under scrutiny as they find their feet and as their sponsors and stakeholders realign expectations. Teams can struggle for many reasons. I won’t list them here; you’ll find many root causes online and may have a few of your own.

Accordingly, this article is for Scrum Masters, Delivery Managers or Project Managers who may work to help turn struggling teams into high performing machines. The key to success here is measures, measures and measures. 

I have a technique I use to performance manage agile teams involving specific Key Performance Indicators (KPIs). To date it’s worked rather well. My overall approach is as follows:

  • Present the KPIs to the team and rationalise them. Ensure you have the team’s buy-in.
  • Have the team initially set targets against each KPI. It’s OK to be conservative. Goals should be achievable in the current state and subsequently improved upon.
  • Each sprint, issue a mid-sprint report detailing how the team is tracking against KPIs. Use On Target and Warning statuses to indicate where the team is on track and where it has to up its game.
  • Provide a KPI de-brief as part of the retrospective. Provide insight into why any KPIs were not satisfied.
  • Work with the team on setting the KPIs for the next sprint at the retrospective.

I use a total of five KPIs, as follows:

  • Total team hours worked (logged) in scrum
  • Total [business] value delivered vs projected
  • Estimation variance (accuracy in estimation)
  • Scope vs Baseline (effectiveness in managing workload/scope)
  • Micro-Velocity (business value the team can generate in one hour)

I’ve provided an Agile Team Performance Tracker for you to use that tracks some of the data required to use these measures. Here’s an example dashboard you can build using the tracker:

In this article, I’d like to cover some of these measures in detail, including how tracking these measures can start to affect positive change in team performance. These measures have served me well and help to provide clarity to those involved.

Estimation Variance

Estimation variance is a measure I use to track estimation efficiency over time. It relies on the team providing hours-based estimates for work items, but it can be traced back to your points-based estimates. As a team matures and gets used to estimation, I expect the time invested to more accurately reflect what was estimated.

I define this KPI as a +/-X% value.

So for example, if the current Estimation Variance is +/-20%, it means the target for team hourly estimates, on average for work items in this sprint, should be tracking no more than 20% above or below logged hours for those work items. I calculate the estimation variance as follows:

[estimation variance] = ( ([estimated time] - [actual time]) / [estimated time] ) x 100

If the value is less than the negative threshold, it means the team is under-estimating. If the value is more than the positive threshold, it means the team is over-estimating. Either way, if you’re outside the threshold, it’s bad news.
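To make the arithmetic concrete, here is a minimal sketch (the hours are hypothetical):

```javascript
// Estimation variance as a percentage of the estimate
function estimationVariance(estimatedHours, actualHours) {
    return ((estimatedHours - actualHours) / estimatedHours) * 100;
}

console.log(estimationVariance(10, 12)); // -20  → logged more than estimated: under-estimating
console.log(estimationVariance(10, 8));  //  20  → logged less than estimated: over-estimating
```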

“But why is over-estimating an issue?” you may ask. “An estimate is just an estimate. The team can simply move more items from the backlog into the sprint.” Remember that estimates are used as a baseline for future estimates and planning activities. A lack of discipline in this area may impede release dates for your epics.

You can use this measure against each of the points tiers your team uses. For example:

In this example, the team is under-estimating bigger ticket items (5’s and 8’s), so targeted efforts can be made in the next estimation session to bring this within the target threshold. Overall, though, the team in this example is tracking pretty well – the overall variance of -4.30% could well be within the target KPI for this sprint.

Scope vs Baseline

Scope vs Baseline is a measure used to assess the team’s effectiveness at managing scope. Let’s consider the following 9-day sprint burndown:

The blue line represents the baseline. This is the projected burn-down based on the scope locked in at the start of the sprint. The orange line represents the scope: the total scope yet to be delivered on each day of the sprint.

Obviously, a strong team tracks against or below the baseline, and will take on additional scope to stay aligned to the baseline without falling too far below it. Teams that overcommit or underdeliver will ‘flatline’ (not burn down) and track above the baseline, and worse still may increase scope while tracking above the baseline.

The Scope vs Baseline measure is tracked daily, with the KPI calculated as the average across all days in the sprint.

I define this KPI as a +/-X% value.

So for example, if the current Scope vs Baseline is +/-10%, it means the actual should not track on average more than 10% above or below the baseline. I calculate Scope vs Baseline as follows:

[scope vs baseline] = ( ( [actual] / [projected] ) x 100 ) - 100

Here’s an example based on the burndown chart above:

The variance column stores the value for the daily calculation. The result is the Scope vs Baseline KPI (+4.89%). We see the value ramp up into the positive towards the end of the sprint, representing our team’s challenge closing out its last work items. We also see the team tracking at -60% below the baseline on day 5, which subsequently triggers a scope increase to track against the baseline – a behaviour indicative of a good performing team.
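If you want to automate the calculation, here is a minimal sketch in Node.js (the daily figures are hypothetical, not the ones from the chart above):

```javascript
// Scope vs Baseline: average of the daily variance between actual and projected scope
function scopeVsBaseline(actualScope, projectedScope) {
    const daily = actualScope.map((actual, day) =>
        (actual / projectedScope[day]) * 100 - 100
    );
    return daily.reduce((sum, v) => sum + v, 0) / daily.length;
}

const projected = [90, 80, 70, 60, 50, 40, 30, 20, 10]; // baseline burn-down
const actual    = [90, 85, 75, 58, 20, 45, 35, 25, 12]; // scope remaining each day

console.log(scopeVsBaseline(actual, projected).toFixed(2) + '%');
```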

Micro-Velocity

Velocity is the most well known measure. If it goes up, the team is well oiled and delivering business value. If it goes down, it’s the by-product of team attrition, communication breakdowns or other distractions.

Velocity is a relative measure, so whether it’s good or bad depends on the team and the measures taken in past sprints.

What I do is create a variation on the velocity measure that is defined as follows:

[micro velocity] = SUM([points done]) / SUM([hours worked])

I use a daily calculation of micro-velocity (vs past iterations) to determine the impact team attrition and on-boarding new team members will have within a single sprint.
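As a quick illustration (the figures are hypothetical):

```javascript
// Worked example: 30 points delivered over 240 logged hours
const microVelocity = 30 / 240;          // 0.125 points per hour

// One use: estimate the points at risk if a team member who would have
// logged another 30 hours this sprint leaves mid-way
const pointsAtRisk = microVelocity * 30; // 3.75 points
console.log(microVelocity, pointsAtRisk);
```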


In conclusion, using some measures as KPIs on top of (but dependent on) the reports provided by the likes of Jira and Visual Studio Online can really help a team to make informed decisions on how to continuously improve. Hopefully some of these measures may be useful to you.

5 Tips: Designing Better Bots

Around about now many of you will be in discussions internally or with your partners on chatbots and their applications.

The design process for any bot distils a business process and associated outcome into a dialog. This is a series of interactions between the user and the bot where information is exchanged. The bot must deliver that outcome expediently, seeking clarifications where necessary.

I’ve been involved in many workshops with customers to elicit and evaluate business processes that could be improved through the use of bots. I like to advocate a low risk, cost effective and expedient proof of concept, prior to a commitment to full scale development of a solution. Show, rather than tell, if you will.

With that in mind, I present to you my list of five house rules or principles to consider when deciding if a bot can help improve a business process:

1. A bot can’t access information that is not available to your organisation

Many bots start out life as a proof of concept, or an experiment. Time and resources will be limited at first. You want to prove the concept expediently and with agility. You’ll want to avoid blowing the scope in order to stand up new data sources or staging areas for data.

As you elaborate on the requirements, ask yourself where the data is coming from and how it is currently aggregated or modified in order to satisfy the use case. Your initial prototype may well be constrained by the data sources currently in place within your organisation (and accessibility to those data sources).

Ask the questions: “Where is this information at rest?”, “How do you access it?”, “Is it manually modified?”.

2. Don’t ask for information the user doesn’t know or has to go and look up

Think carefully – does the bot really need to seek clarification? Let’s consider an example where the bot opens by asking the user for their service ticket reference number.


In practice, you’re forcing the user to sign in here to some system or dig around their inbox and copy/paste a unique identifier. I’ve yet to meet anyone who has the capacity to memorise things like their service ticket reference numbers. You can design smarter. For example:

  1. Assume the user is looking for the last record they created (you can use your existing data to determine if this is likely)
  2. Show them their records. Get them to pick one.
  3. Use the dialog flow to retain the context of a specific record

By all means, have the bot accommodate scenarios where the user does provide a reference number. Remember, your goal is to reduce time to the business outcome and eliminate menial activity. (Trust me. Looking up stuff in one system to use in another system is menial activity.)

3. Let’s be clear – generic internet keyword searches are an exception


When Siri says ‘here’s what I found on the Internet’, it’s catching an exception; a fall-back option because it’s not been able to field your query. It’s far better than ‘sorry, I can’t help you’. A generic internet/intranet keyword search should never be your primary use case. Search and discovery activity is key to a bot’s value proposition, but these functions should be underpinned by a service fabric that targets (and aggregates) specific organisational data sources. You need to search the internet? Please, go use Bing or Google.

4. Small result sets please

As soon as a chat-bot has to render more than 5 records in one response, I consider this an auto-fail.

Challenge any assertion that a user would want to see a list of more than 5 results, and de-couple the need to summarise data from the need to access individual records. Your bot needs to respond quickly, so avoid expensive queries for data and large resulting data sets that need to be cached somewhere. For example:


In this example, the bot provides a summary with enough information to give the user an option to take further action (do you want to escalate this?). It also informs the most appropriate criteria for the next user-driven search/discovery action (reports that are waiting on me).

Result sets may return tens, hundreds or thousands of records, but the user inevitably narrows this down to one, so the question is “how do you get from X results down to 1?”.

Work with the design principle that the bot should apply a set of criteria that returns the ‘X most likely’. Use default criteria based on the most common filtering scenario, but allow the user to re-define that criteria (for example, “reports that are waiting on me”).


5. Don’t remove the people element

Remember a bot should eliminate the menial work a person does, not substitute the person. If you’re thinking of designing a bot to substitute or impersonate a person, think again.


No one wants to do menial work, and I’d hedge my bets that there is no one in your organisation whose workload is 100% menial. Those that do menial work would much rather re-focus their efforts on more productive and rewarding endeavours.

Adding a few Easter Eggs to your bot (i.e. human-like responses) is a nice to have. Discovery and resulting word of mouth can assist with adoption.

Consider whether your process involves the need to connect with a person (either as a primary use case, or an edge case). If this is the case, ensure you can expedite the process. Don’t simply serve up a contact number. Instead, consider how to connect the user directly (via tools like Skype, if they are ‘online’). Alternatively, allow the user to request a call-back.



Remember, there are requirements bots and agents cannot satisfy. Define some core principles or house rules with your prospective product owners and test their ideas against them. It can lead you to high value business solutions.