Harnessing Microsoft’s Business Application Platform

The Power Platform (now dubbed the Business Application Platform) started life as a collection of three products introduced into the Office 365 portfolio: [Power]Apps for lightweight business applications, [Power]Automate for business process automation, and [Power]BI for reporting & insights. It now has a fourth constituent, [Power]Virtual Agents: a ‘low code’ solution for developing bots (for use in collaboration solutions like Microsoft Teams).

The platform comes with a framework for managing data and information that is shared with Microsoft’s Dynamics 365 service: the Common Data Service and the Common Data Model. This is where you capture and interact with your data model if you’re not building solutions with a high degree of synergy with SharePoint Online.

The Business Application Platform is a hot property right now, and organisations are looking for opportunities to evaluate and pilot its capabilities. I’ve seen a surge in requests for partner support to deliver business solutions powered by the platform.

So why, subject to a case-by-case evaluation, do I find myself concluding that in some scenarios the Business Application Platform is not the right solution?

OK. Put the pitchforks down, and hear me out. I’m not a blind evangelist. I think the platform is great but that doesn’t mean it’s right for every scenario.

In this article, I’ll be examining what’s required to make the Business Application Platform a viable option for your organisation, and evaluating it against other comparative enabling technologies.

As a Service

The clue is in the name: Business Application Platform. It’s a platform capability. Is it a good idea to develop solutions for a platform that has not been properly embedded within your organisation?

I’ve seen organisations take the following approaches:

  • They ban/block usage of the Business Application Platform due to security concerns, predominantly around access to and usage of business data. (I realise this is less about the platform, and more a concern that existing security vulnerabilities might be exposed or exploited).
  • They enable the Business Application Platform, but restrict it to usage within a qualified group. This is usually a temporary measure, mitigating concerns around who gets to deliver solutions on it and, more importantly, who supports those solutions.
  • They launch the Business Application Platform, perhaps with some injected Change Management. Solutions start appearing, there is an implicit expectation around IT support, and IT gets nervous that they’re not fully across what’s happening out there.

Landfall: The legacy of Excel

The concern over who owns and supports what is nothing new. It was happening 20 years ago with Excel. Consider this scenario:

  • Excel is used in the Accounting team for bookkeeping
  • Alex, from Accounting, takes a course in Visual Basic for Applications.
  • They decide to play with Excel and modify the bookkeeping workbook to automate some things.
  • It’s super effective! The rest of the team think it’s awesome. Alex makes more modifications and starts teaching the rest of the group.
  • Fast forward 6 months: what was a basic workbook is now a fully fledged business (critical) application.
  • Morgan, the head of Accounting, is getting nervous. What if Alex moves on? What if there’s a problem with the solution and the team can’t fix it?
  • Morgan approaches Jules in IT support, with an expectation that they can support the solution and have the skills to do it…

The keyword here is expectation. And it’s established as part of a service governance plan:

Rule Number 1: Set expectations with the consumers of your service so they understand roles, responsibilities and accountability. Do this before you deploy the Business Application Platform.

This brings me to landfall. It’s the term I use to describe the process of transitioning technical ownership of a solution from a citizen developer (or business unit) to a formal IT support function. The Business Application Platform is targeted at everyone, including business users, and trust me, you want to put it into their hands because you’re giving them tools to solve problems and be productive. In short: you need to define and communicate a process that transitions a solution from the business into IT support as part of your governance plan.

Rule Number 2: Define and communicate a process for landfall in your governance plan

You can design a foundation for the Business Application Platform that meets your requirements for delegation of administration and anticipates the transfer of ownership. For example: the creation of an additional (logical) environment for IT-owned and managed solutions that sits alongside the default sandbox environment created with your tenancy.

Evaluating Solutions for the Business Application Platform

I work with customers to review business requirements and evaluate enabling technology. Often I see solutions masquerading as requirements, driven by funding incentives, a need to innovate and adopt new technology, or the desire to generate a case study. I get it.

There are some gotchas: considerations beyond whether the technology can deliver a solution. There are comparative enabling technologies and key differentiators, even within the Microsoft stack. For example:

Alternatives to PowerApps for presentation and data collection include Microsoft Forms, the SharePoint Framework, Single Page Applications or fully fledged web applications.

Alternatives for Power Automate include Azure Logic Apps (it shares the same foundation as Power Automate) and Azure Functions. You’ve also got commercial off-the-shelf workflow automation platforms such as Nintex and K2. Consider ongoing use of SharePoint 2010 or 2013 workflow in SharePoint Online a burning platform.

Power Virtual Agents are an alternative to going bespoke with the Microsoft Bot Framework.

Rule Number 3: Evaluate requirements against comparative technologies with an understanding of key differentiators, dependencies and constraints.

So what are some of the key considerations?

Cost

The licensing model for the Business Application Platform is multi-tiered, so your license determines what’s in your toolbox. It might restrict use of specific connectors to line of business applications, the ability to make a simple HTTP request to a web service, or how a dashboard might be published and shared. Don’t commit to a Business Application Platform solution only to be stung with P2 licensing costs down the line.

Size and Complexity

Business Application Platform solutions are supposed to be lightweight. Like doing the COVID-19 check-in via Teams. Just look at the way you port solutions between environments. Look at the way they are built. Look at the way the platform is accessible from Office 365 applications and services. Large and complex solutions built for the Business Application Platform are arguably as hard to support and maintain as their ‘as code’ counterparts.

Synergy with Development Operations Cadence

Let’s assume your organisation has an established Development Operations (DevOps) capability, and there’s a process in place for building, testing, and delivering business solutions, and tracking technical change. It may, for example, advocate Continuous Integration and Delivery.

Along comes the Business Application Platform, with a different method to build, deploy and port solutions built on it. It’s immediately at odds with your cadence. Good luck with the automation.

Technologies such as Logic Apps may be more suitable, given that solutions are built and deployed as code.

Synergy with Office 365

It’s not a hard constraint, but Business Application Platform solutions are a better fit if there is a high synergy with Office 365 applications and services. The current experience enables business users to build solutions from within the Office 365 ecosystem, and with a pre-defined context (e.g. a SharePoint Document Library).

Solutions that require integration with a broader set of systems may warrant the use of alternative enabling technologies, especially if additional plumbing is required to facilitate that connectivity. Do you break your principles around ‘low code’ solutions if there’s now a suite of Azure Functions on which your Flow depends to run your business logic?

Ownership

Business users have an appreciation for a solution delivery lifecycle, but they’re not developers. The Business Application Platform is designed to empower them and comes with the tools required to design, build, test and publish solutions. Your decision to use the Business Application Platform should be informed by a strategy to have business users own and maintain their solutions. If you get the foundations right in terms of security and governance, they’re not breaking the rules.

Maturity

Is the Business Application Platform an established service in your enterprise? If you’re leaning on partner support to crank out those solutions for you, are you ready to support and maintain them?

Low Code?

I see the term ‘no code’ or ‘low code’ everywhere. You don’t need developers! Anyone can do it! It’s cheaper.

Here’s a fact: it’s possible to build a monstrosity of a ‘low code’ solution, and it’s possible to take a lot of time to do it. Try building a complex UI in PowerApps. Go on, try.

I prefer the term no-hassle. The Business Application Platform is ready to use and all the technical bits are there. All you need is the license, and the skills. Keep it small and simple.

You want the ‘no hassle’ benefit to apply to service owners, administrators and consumers alike. There’s a balance here and decisions impacting one group may be to the detriment of the others.

Rule Number 4: Reach a consensus on the definition for ‘Low Code’ and what needs to be in place to realise the benefits


In summary, the Business Application Platform is a game changer, but it needs to be delivered as a platform capability. The solution that’s going to net you your case study is the first of many that will follow, but not all solutions are right for the platform. Hopefully this article provides you with some pointers around how to evaluate the platform as a potential enabling technology; it’s a culmination of what I’ve learned.

Generating Document Metadata using the Power Platform (part 2)

AI Builder

(This is part two of a two-part article covering AI Builder and the Power Platform. I recommend you skim through part one for some much needed context).

In part one of this two-part series we created an AI Builder model using the Form Processing model type and trained it to extract and set the values for selected fields in documents of a specific type (in my case: a statement of work).

We now have the blueprint for a Content Type for use in SharePoint Online:

(Content Type schema)

In this article (part two of the series), we’ll be creating a Flow using Power Automate. It will use our AI Builder model to extract and store metadata for statements of work uploaded to SharePoint Online.

(I’ll continue to use the statement of work as an example throughout the series. Feel free to substitute it, and its selected fields, for something more applicable in your own organisation).

Prerequisites

Before we create the Flow, we’ll need to set up the SharePoint Online components: the Columns to hold the metadata, the statement of work Content Type, and the Document Library it’s attached to (the Site Owner role will give you sufficient rights to complete this work). I don’t want to go into detail here, but I’ve linked to supporting articles if you’re new to SharePoint Online.

With the prerequisite setup complete, our Document Library settings should look something like this:

With that done, we’re ready to build our Flow!

Design

Before I build a Flow using Power Automate, I like to sketch a high-level design to capture any decision logic and get a better feel for what the workflow has to do. We can worry about how these steps work when we implement. Here’s what I want the Flow to do:

There are three sub-processes in the Flow. The trigger will be the creation of a document in our Document Library (manual or otherwise). If that document isn’t in pdf format, we’ll need to convert it. This is because AI Builder’s Form Processing model does not support documents in native (Microsoft) Office format. Finally, we’ll need to send the document off to AI Builder so it can be analysed.

Creating the Flow

OK, let’s get building our Flow. The solution is technically a ‘no-code’ solution but we’re going to create some expressions to handle things like token substitution. Think: Excel, not Visual Studio Code.

Creating a Flow is easy. Simply head on over to flow.microsoft.com, sign in and hit + Create. Our Flow will kick in when someone uploads a document to SharePoint Online, so select the Automated flow template:

The first building block of an automated flow is the trigger. Select (or search for) the trigger called When an item is created or modified and hit Create.

Configuring the trigger is easy. Simply pick your Site Address from the list provided, and specify the List Name (set a Custom Value if Power Automate is having a hard time resolving your List Name, as it did frequently for me). Click + New Step when ready.

Handling Variables

Next, we need to create some variables to store values we’ll need to reference along the way. The Initialize Variable action is here to help, so we’ll create one for each variable we need.

  • Initialize a variable called FolderPath (type: string) and set its value using the following expression:
substring(triggerOutputs()?['body/{Path}'],0,add(length(triggerOutputs()?['body/{Path}']),-1))

(This removes the trailing ‘/’ from the relative path to the document, using the List Item’s native Path property; for example, a path of ‘Shared Documents/Statements of Work/’ becomes ‘Shared Documents/Statements of Work’. It sucks, but we need to pass this value to a web service later in the Flow and it breaks if you keep that trailing ‘/’ there.)

  • Initialize a variable called PDFUrl (type: string) and leave its Value blank for now (this will store a reference to the converted PDF we create later in the Flow).
  • Initialize a variable called PDFContent (type: object) and leave its Value blank for now (this will store the content to be passed to AI Builder for analysis later in the Flow).

At this point in the process, your Flow should look like this:

(Note: You can rename your triggers and actions to help with readability, as shown in the image above).

Decision Logic

Next, we need another action to perform a check to see if our document is in pdf format. Add another action and search for: Condition. We can use the File Name property in Dynamic content (created by our trigger) and simply check the last three characters in that value (i.e. the file extension).
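
If you prefer to drive the Condition with a single expression rather than the basic-mode boxes, something like the following (compared against a value of true) does the job. Treat it as a sketch: it assumes the trigger exposes the file name as the {FilenameWithExtension} property, and it checks the extension rather than literally counting the last three characters:

endsWith(toLower(triggerOutputs()?['body/{FilenameWithExtension}']), '.pdf')

Lower-casing the name means a document called Contract.PDF is still routed down the ‘yes’ branch.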

At this point our Flow looks like this, and it must split into two: the ‘yes’ branch (cases where the document is in pdf format) and the ‘no’ branch (anything else). We’ll handle the ‘yes’ branch first as it’s the easiest.

The ‘Yes’ Path

We’ll need to create a reference to the content of the document and pass that to AI Builder for analysis. The good news is that if the document is already in pdf format, we can use the Get file content using path action and then assign the value from that to the PDFContent variable we created earlier in the Flow:

  • Add a Get file content using path action to the Yes branch of the Flow. Set the File Path to the Full Path property (under dynamic content) that was created by our trigger.
  • Next, add a Set Variable action. We’re going to assign the File Content property (under dynamic content) created by the previous action to the PDFContent variable we initialized earlier.

Still with me? Your setup for the ‘yes’ branch should look like this:

The ‘No’ Path

It gets a little tough here, but stick with me. If the document is not in pdf format, we must convert it.

There are a number of options here, including third-party actions you can purchase to handle the conversion for you. But there is also a way to have SharePoint Online convert the document. For this solution I’ve drawn on prescriptive guidance published by Paul Culmsee (@paulculmsee). Full credit to him for figuring this stuff out!

Conversion works like this:

  • We issue a request to SharePoint Online for the document payload (including valuable meta-data)
  • We parse the resulting JSON so that the meta-data we need is cached by Flow as dynamic content
  • We assemble the URL to the converted pdf, taking bits from the JSON payload.
  • We issue an HTTP request for the converted pdf and store the body of the response in our PDFContent variable.

OK, let’s go:

  • Add a Send an HTTP request to SharePoint action to the No branch of the Condition. The Uri we want to invoke is as follows:
_api/web/lists/GetByTitle('<LIST_NAME>')/RenderListDataAsStream?FilterField1=ID&FilterValue1=<DOCUMENT_ID>

(We need to swap out <LIST_NAME> with the name of your Document Library and <DOCUMENT_ID> with the ID property of the document that was uploaded there. Fortunately the ID property is available as dynamic content).
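
As an illustration only (the Library name here is made up), a populated Uri ends up looking something like this, with the document ID supplied by the trigger’s dynamic content:

_api/web/lists/GetByTitle('Statements of Work')/RenderListDataAsStream?FilterField1=ID&FilterValue1=42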

The parameters to this request must contain instructions to return a Uri to the pdf version of the document, so we include the following in the body of our request:

{ 
    "parameters": {
       "RenderOptions" : 4103,
       "FolderServerRelativeUrl" : "/sites/sandpit/@{variables('FolderPath')}"
    }
}

(Here we need to send the folder path with the trailing ‘/’ removed, so we’re using the FolderPath variable we set at the start of the Flow).

So your action configuration should look similar to this:

Next, we need a copy of the payload schema returned by this service call so we can reference it as dynamic content in our Flow. The easiest way to get it is to Test the Flow and copy the response body. Hit the Test button, select I’ll perform the trigger action and hit Save and Test.

Flow will wait patiently for you to upload a document to your Library to trigger it. Once done, steps in your Flow will be ticked off as they are tested. Examine the Send an HTTP request to SharePoint action and copy the body content to your clipboard (we’ll use this to create the next action).

  • Switch back to edit mode and add a Parse JSON action. We’re going to assign the body property (under dynamic content) created by the previous action to the Content property. To set the schema property, select Generate from sample and paste the body content you copied to your clipboard when you ran the test.

Your action configuration should look similar to this:

We need to parse the output of the HTTP request because we’re using key values in the payload to create the Url to the converted PDF in our next step.
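
To make that next step easier to follow, here’s a heavily trimmed sketch of the parts of the payload we care about. The values are placeholders (your real response contains a lot more, which is why we generate the schema from a sample rather than hand-crafting it):

{
    "ListData": {
        "Row": [
            {
                "File_x0020_Type": "docx",
                ".spItemUrl": "<url of the source document item>"
            }
        ]
    },
    "ListSchema": {
        ".mediaBaseUrl": "https://australiaeast1-mediap.svc.ms",
        ".callerStack": "<encoded token>",
        ".driveAccessToken": "<query string token>"
    }
}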

  • Add a Set Variable action for the PDFUrl variable. We’re going to use the following expression to assemble our Url:
@{body('Parse_JSON')?['ListSchema']?['.mediaBaseUrl']}/transform/pdf?provider=spo&inputFormat=@{first(body('Parse_JSON')?['ListData']?['Row'])?['File_x0020_Type']}&cs=@{body('Parse_JSON')?['ListSchema']?['.callerStack']}&docid=@{first(body('Parse_JSON')?['ListData']?['Row'])?['.spItemUrl']}&@{body('Parse_JSON')?['ListSchema']?['.driveAccessToken']}

Your action should look like this:

  • The .mediaBaseUrl property contains the URL to the platform’s media content delivery service (e.g. australiaeast1-mediap.svc.ms)
  • The first expression parses the file type, to pass the format to convert from (e.g. docx)
  • The second expression parses the uri to the pre-converted document in SharePoint Online.
  • The .callerStack and .driveAccessToken properties are encoded strings (tokens) providing useful session context.
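
Put together, the value stored in PDFUrl follows this shape (placeholders shown where the payload supplies encoded values):

https://australiaeast1-mediap.svc.ms/transform/pdf?provider=spo&inputFormat=docx&cs=<.callerStack>&docid=<.spItemUrl>&<.driveAccessToken>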

(If you want, Test the Flow at this stage. Copy the output of the Set PDF Url action into your browser. If it’s correct, it will display a PDF version of the document you submitted at the start of the test).

  • Next, we need the Flow to request the converted pdf directly. Add an HTTP action to your Flow. (Astonishingly, this action requires a premium license). Assign the PDFUrl variable to the URI property.

Your action should look like this:

  • The last action in your ‘no’ branch will assign the body of the response generated by the HTTP action (as dynamic content) to the PDFContent variable we initialized at the start of the Flow.
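
If you’d rather type an expression than pick the Body token from dynamic content, it’s simply a reference to the response body of that action (assuming you’ve left it with its default name of HTTP):

body('HTTP')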

Your action should look like this:

Quick Recap

At this stage, your Flow has two branches based on a test to see if the document format is pdf. Each branch ultimately sets the PDFContent variable we’ll use for the final step. In the yes branch, the content is a snapshot of the document uploaded to SharePoint Online. In the no branch, it’s a snapshot of a pdf conversion of that document. Here’s what it should look like:

Calling AI Builder

The final step of the flow occurs when the two branches converge. We’ll invoke the AI Builder model we created in part one of this article and then update the document’s List Item once we have the meta-data.

  • Search for an action called Process and save information from forms and add it to your Flow where the conditional branches converge. Select your model from the list provided. Use the PDFContent variable (as dynamic content) here.

Your action should look like this:

  • Next, we need to take the output of the analysis and use it to update the column data for the statement of work content type in SharePoint Online. Add an Update Item (SharePoint) action to the Flow. Once you specify the List Name the fields will be loaded in for you. Simply use the dynamic content panel to assign values returned from AI Builder to your columns.

Your action should look like this:

As a stretch goal, you can add some resilience to your Flow by adding conditional logic when you update your list item. AI Builder passes its confidence scores back to Flow, so you could, for example, update the list item only if the confidence score is above a specific threshold.
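
As a sketch only: assuming a selected field named ProjectName (substitute your own), pick its confidence score token from the AI Builder action’s dynamic content and use a Condition expression along these lines, adjusting the comparison value depending on whether your model reports confidence as 0–1 or 0–100:

greaterOrEquals(float(<ProjectName confidence score>), 0.8)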

The Final Solution

Your final solution should look something like this:

As statement of work documents are uploaded to the Document Library, Flow will pick them up and push the content to your model in AI Builder for analysis. The result is a complete set of metadata for each document without the need for manual intervention.

Final things to note:

  • Processing is asynchronous and will take a few seconds (especially if conversion is needed), so the meta-data will not be available immediately after the document is uploaded. My Flow completed on average in about 15-25 seconds. Your mileage may vary.

Wow. That was quite a lot. Thanks for sticking with me. In this series, we trained and published a model using AI Builder and then created a Flow using Power Automate to invoke it.

The solution is able to analyse documents as they are uploaded to SharePoint Online. Column values are extracted from the document as AI Builder runs the analysis. The Flow updates the associated List Item, assigning values to our Columns.

Generating Document Metadata using the Power Platform (part 1)

AI Builder

I hate filling in forms. Really, I do. Imagine if you had to fill in a form each time you uploaded a document into SharePoint Online? You do? I feel for you.

For some time I’ve been wanting to look at how my organisation can use tools such as AI Builder to help auto-generate metadata for the types of documents we store and manage in SharePoint Online.

At the time of writing, AI Builder is about 12 months old following its preview, and it’s been aligned to the Power Platform. The solution includes a form processing AI model, which can be trained to extract named properties from your documents.

I wanted users to be able to search, sort and filter on key pieces of information contained in our statements of work (contracts). The logical solution was to create columns to store this information, given the content is hosted in SharePoint Online. From there, I could convert these columns into managed properties, for use in search refiners and as referenceable values in other solutions.

This is part one of a two-part series designed to walk you through the steps to develop a no-code solution to analyse documents and automatically extract and set associated metadata. (you can find part two here). Our solution will use the following building blocks:

  • SharePoint Online
  • AI Builder
  • Power Automate

From a licensing perspective, you’ll need a Power Automate P2 (Premium) licence to re-create my solution, and an E1 licence or better to configure the SharePoint Online components.

This article (part one) covers the AI Builder component. Part 2 covers the SharePoint Online / Power Automate component.

Create your Model

You can access AI Builder via PowerApps or Power Automate. Here you can select a pre-fabricated model and train it. The models you train and publish are stored centrally and visible to all your Power Platform solutions so it doesn’t matter how you access AI Builder.

We’re going to use the Form Processing model and train it to extract information from our statement of work documents.

Some important things to note:

  • At the time of writing, AI Builder can only analyse content in pdf, png, jpg and jpeg format (don’t worry, our solution will handle conversion from native Office document formats).
  • Start with 5 examples of the document you want the solution to analyse. Ensure they contain examples of the meta-data you want to extract. This is because the field selection process targets a single document, and you don’t get to choose which one if you start with many documents.
  • For a reliable model, upload at least 5 other training documents after field selection. Manual effort is needed to teach AI Builder how to analyse each document. Put these in the same location so you can bulk upload them in one action.
  • It’s important to introduce some variety here (in my case, there were several key variations of our statement of work I wanted AI Builder to recognise). A set of completely different, unrelated documents will skew your model.

Upload

First, convert any training documents to pdf format if they are in a native Office document format.

You’ll be prompted to upload your documents. Remember you can add more training documents later if you like.

Once you’ve uploaded your documents, hit that Analyse button to give AI Builder an initial look.

Review

Now you’ll have the opportunity to identify the metadata you want to extract from similar documents. AI Builder may take the initiative and create some for you. In any case, keep what you want. You can come back after saving the model to repeat this process.

AI Builder will present an example document to you for review. Simply highlight the content you wish the model to extract from the document and create an associated field name. As mentioned earlier, you’ll want to make sure you can find all your fields in this document.

When you’re done with the document, hit that Confirm Fields button and you’re ready to review the rest of your payload.

This review is both manual and sequential, one document at a time. Your progress is tracked as you go. You’ll need to identify and assign values to each field you created.

(In cases where a field does not occur in one of your documents, or its value is not set, you can select the Field not in document option).

Note that if you add more documents to the model, you’ll need to run through this process for each new document you added. You won’t have to repeat the process for documents already reviewed unless you add, modify or remove a selected field.

At the end of the review process, you’ll be able to click the Done button. On clicking Next you’ll be presented with a model summary.

Training

The final step before testing is to Train your model. This will save it and the training will happen in the background. Keep an eye on the Status. It will read Trained when training is complete. You’re now ready to test your model before publishing it.

Testing

So here I have my model, which I have lovingly called SOW_Processing. I needed 20 documents to fully train my solution. There are 12 selected fields my model will extract during analysis. Your own end state will of course vary.

Listing the field names here is handy, as we’ll need a Content Type to represent a statement of work and Columns to hold this metadata in our SharePoint Online solution. We’ll need a blueprint for the SharePoint Online setup. Let’s go with this:

Here we’re testing the reliability of the model. It’s not fit for purpose if, for example, it can’t identify correct values for the selected fields in a statement of work presented to it.

The Quick test allows us to see how well a model can identify the correct metadata in documents we present to it for analysis. Make sure you use new documents (not used to train the model) for testing.

Upload a document and have the model analyse it. On scanning the document, you’ll find parts of it highlighted, indicating that the model has found a value for one of its selected fields.

The Confidence score is important. This represents the degree of certainty the model has that the value for the selected field is correct.

Some important things to note:

  • Training is an iterative process. You’re supposed to re-train your model over time, introducing new variations to it.
  • Set a minimum confidence threshold for each selected field as part of your acceptance criteria for the model.
  • Scores between 80 and 100 are good. You cannot guarantee a confidence score of 100 even if you train your model extensively, since new variations can always be presented to it.
  • Scores below 80 introduce risk and below 50 are indicative of an unreliable model. An unreliable model is, in my opinion, not fit for purpose (since you can’t trust the data). The solution here is to train your model using your test documents (by uploading them to the model and re-training it). This will teach your model to recognise such variations in future tests.
  • You can use the confidence score in applications (such as Flows or Apps) as part of your validation logic! (more on that in part 2).

Publication

So, let’s assume you’ve run some Quick tests, and your model is identifying values for selected fields with a confidence score that is within the threshold set in your acceptance criteria. Great! You can now publish it by selecting Publish.

Your model is ready for use.


Now that we’ve created an AI Builder model to extract selected fields from our statement of work documents, we can develop a Power Automate solution to extract and set the metadata for these documents when they get uploaded to SharePoint Online!

Head on over to part two where we’ll cover the setup and configuration of the workflow (Power Automate) component of the solution.