The Yammer Roast

Inspired by Comedy Central, the Yammer Roast is a forum in which we can directly address resistances around Yammer, its role, and past failures.

Some of my clients have tried Yammer and concluded that, for various reasons, it’s failed to take hold. For some, the value is clear and it’s a case of putting a compelling approach and supporting rationale to sponsors and consumers who remain sceptical. Others are looking for a way to make it work in their current collaboration landscape.

The Yammer Roast is designed to tease out, recognise and address key resistances. It’s not an opportunity to blindly evangelise Yammer; it’s a consulting exercise that clarifies Yammer’s role as a business solution and what’s needed for a successful implementation.

In this article, I’ll cover some of the popular resistances aired at Yammer Roasts, why these resistances exist and how you can address them. If you’re an advocate for social networking in your own organisation, my hope is that this can inform your own discussion.

  1. We have concerns over inappropriate usage and distraction from proper work

There’s a perception that Yammer is a form of distraction and employees will be off posting nonsense on Yammer instead of doing proper work. Even worse, they may be conducting themselves inappropriately.

A self-sustaining Yammer network has to find that balance between [non-work stuff] and [work stuff], and it needs an element of both to be successful. Informal, social contributions beget more meaningful work contributions.

Consider that what is perceived as informal, non-work stuff is valuable. That person who just posted a cat picture? They are adopting your platform. So are the people who liked or commented on it. Consider the value to the organisation if people are connecting with each other and forming new relationships, outside of the confines of an organisational hierarchy.

Assume your employees know how to conduct themselves and can practice good netiquette. They signed a contract of employment which includes clauses pertaining to code of conduct. Perhaps refer to that in your terms of use.

Establish a core principle that no contribution should be discouraged. It really doesn’t matter where content in Yammer is generated, and any one person’s view of the content is informed by who they follow, the groups they subscribe to and the popularity of content. Uninteresting, irrelevant content is quickly hidden over time. “But what if someone puts a cat picture in the All Company feed?” So what? What if the CEO likes it? Consider creating a foundational set of groups to ensure that on day one there’s more than just the All Company feed.

Strike that balance between work-stuff and non-work stuff.  Set an objective for your community manager (yes, a formal responsibility!) to help combat potential stage fright; there are numerous incentives and initiatives that can come into play here.  Accept the fact that your social network will, and should, grow organically.

  2. We’ve got Yammer and no-one is using it.

…but your partners and vendors are and they’re looking to collaborate with you.

Stagnant networks are a common scenario. Your organisation may be looking at alternative platforms as a way to reset/relaunch. Here, you lament the lack of tangible, measurable business outcomes at the outset of the initial rollout, or the lack of investment in change management activities to help drive adoption of the platform.

You’ll smile and nod sagely. But, for whatever reason, you’re here. So how can past experiences inform future activities?

Whether you use Yammer or not, the success of your social network in its infancy is dependent on measurable business outcomes. Without the right supporting campaign, a way to track adoption and a way to draw insight from usage, you effectively roll the dice with simply ‘turning it on’. Initiatives around Yammer can start small with a goal of communicating the success (of a process) and subsequently widening its application within your business.

Simply swapping out the technology without thinking about the business outcome may renew interest from sponsors who’ve lost faith in the current product, but you risk a rinse and repeat scenario.

“But we’re dependent on executive sponsorship!” I hear you lament. This is a by-product of early boilerplate change campaigns, where success somehow rested on executives jumping in to lead by example. Don’t get me wrong, it’s great when this happens. From my perspective, you need any group within your business with a valuable use case and the willingness to try. You have O365; the technology is there.

Consider the Yammer client not just as a portal into your own network, but into the networks of your vendors and partners. Access to your partner/vendor product teams (via Yammer External Networks), and the ability to leverage subject matter expertise from them and the wider community, is a compelling case in the absence of an internal use case.

Combatting any negative perceptions of your social network following a failure to launch is all about your messaging, and putting Yammer’s capability into a wider context, which leads me to…

  3. But we’re using Teams, Slack, Jabber, Facebook for Workplace (delete as appropriate)

Feature parity – it can be a head scratcher. “But we can collaborate in Skype. And Teams! And Yammer! And via text! What will our users do?” Enterprise architects will be advocating the current strategic platform in the absence of a differentiator, or exception. Your managed services team will be flagging additional training needs. There will be additional overheads.

If you’re there to champion Yammer in the face of an incumbent (or competing) solution, you need to adopt the tried and tested approach: 1. identify the differentiator and align the new technology (i.e. Yammer) to it; 2. quantify the investment; and 3. outline the return on investment.

As a consultant my first conversations are always focused around the role Yammer will play in your organisation’s collaboration landscape. The objective is to ensure initial messaging about Yammer will provide the required clarity and context.

This reminds me of an engagement some time ago: an organisation with a frontline workforce off the radar, forming working groups in Facebook. “We aren’t across what’s going on. We need to bring them over to Yammer.” Objective noted, but consider that a) these users have established their networks and their personal brand, and b) they are collaborating in the knowledge that big brother isn’t watching. Therefore, there’s no way in hell they’ll simply jump ship. The solution? What can you provide that the current solution cannot? Perhaps the commitment to listen, respond and enact change.

The modern digital workplace is about choice and choice is okay. Enable your users to make that informed decision and do what is right for their working groups.

  4. It’s another app. There’s an overhead to informing and educating our business.

Of course there is. This resistance is more about uncertainty as to the strategy for informing and educating your business, and working out the ‘what’s in it for me?’ element.

There is a cost to getting Yammer into the hands of your workforce. For example, from a technical perspective, you need to provide everyone with the mobile app (MAM scenarios included) and help users overcome initial sign-in difficulties (MFA scenarios included). Whatever this may cost in your organisation, your business case needs to provide a justification for (i.e. a return on) that investment.

Campaign activities to drive adoption are dependent on the formal appointment of a Community Manager (in larger organisations), and a clear understanding around moderation. So you do need to create that service description and governance plan.

I like to paint a picture representing the end state – characteristics of a mature, self-sustaining social network. In this scenario, the Yammer icon sits next to Twitter, Instagram, Facebook on the mobile device. You’re a click away from your colleagues and their antics. You get the same dopamine rush on getting an alert. It’s click bait.  God forbid, you’re actually checking Yammer during the ad-break, or just before bed time. Hang on, your employee just pointed someone in the right direction, or answered a question. Wait a second! That’s voluntarily working outside of regular hours! Without pay!

  5. Yammer? Didn’t that die out a few years ago?

You’ve got people who remember Yammer back in the days before it was a Microsoft product. Yammer was out there. You needed a separate login for Yammer. There were collaboration features built into Microsoft’s SharePoint platform but they sucked in comparison, and rather than invest in building competitive, comparative features into their own fledgling collaboration solution, Microsoft acquired Yammer instead.

Fast forward a few months, and there’s the option to swap out social/newsfeed features in SharePoint for those in Yammer, via the best possible integration at the time (which was essentially link replacement).

Today, with Office 365, there’s more integration. Yammer has supported O365 sign-in for a couple of years now, and Yammer functions are popping up in other O365 workloads. A good example is the Talk about this in Yammer function in Delve, which frames the resulting Yammer conversation within the Delve UI. From an end user experience perspective there is little difference between Yammer now and the product it was pre-Microsoft acquisition, but the product has undergone significant changes (external groups and networks, for example). Expect ongoing efforts to tighten integration with the rest of the O365 suite, and understand and address the implications of cutting off that functionality should you move away.

The Outcome

Yammer (or your social networking platform of choice) becomes successful when it demonstrates a high value role in driving your organisation’s collaborative and social culture. In terms of maturity we’re talking self-sustaining, beyond efforts to drive usage and lead by example.

Your social network is an outlet for everyone in your organisation. People new to your organisation will see it as a reflection of your collaborative and social culture; give them a way to connect with people and immediately contribute in their own way.

It can be challenging to create such an outlet where the traditional hierarchy is flattened, where everyone has a voice (no matter who they are and where they sit within the organisation). Allowing online personalities to develop without reluctance and other constraints (“if it’s public, it’s fair game!”) will be the catalyst to generating the relationships, knowledge, insight (and resulting developments) that will improve your business.

Bots: An Understanding of Time

Some modern applications must understand time, because the messages they receive contain time sensitive information. Consider a modern Service Desk solution that may have to retrieve tickets based on a date range (the span between dates) or a duration of time.

In this blog post, I’ll explain how bots can interpret date ranges and durations, so they can respond to natural language queries provided by users, either via keyboard or microphone.

First, let’s consider the building blocks of a bot, as depicted in the following view:

The client runs an application that sends messages to a messaging endpoint in the cloud. The connection between the client and the endpoint is called a channel. The message is basically something typed or spoken by the user.
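
To make this concrete, here’s a minimal sketch of that wiring, assuming the Bot Framework Node.js SDK (v3) and its conventional environment variable names:

const restify = require('restify');
const builder = require('botbuilder');

// The connector is the bot's messaging endpoint in the cloud
const connector = new builder.ChatConnector({
    appId: process.env.MICROSOFT_APP_ID,
    appPassword: process.env.MICROSOFT_APP_PASSWORD
});

const bot = new builder.UniversalBot(connector);

// Messages arrive from the channel at /api/messages
const server = restify.createServer();
server.post('/api/messages', connector.listen());
server.listen(process.env.PORT || 3978);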

Now, the bot must handle the message and provide a response. The challenge here is interpreting what the user said or typed. This is where cognitive services come in.

A cognitive service is trained to take a message from the user and resolve it into an intent (a function the bot can then execute). The intent determines which function the bot will execute, and the resulting response to the user.

To build time/date intelligence into a bot, the cognitive service must be configured to recognise date/time sensitive information in messages, and the bot itself must be able to convert this information into data it can use to query data sources.

Step 1: The Cognitive Service

In this example, I’ll be using the LUIS cognitive service. Because my bot resides in an Australia-based Azure tenant, I’ll be using the https://au.luis.ai endpoint. I’ve created an app called Service Desk App.

Next, I need to build some Intents and Entities and train LUIS.

  • An Entity is a thing or phrase (or set of things or phrases) that may occur in an utterance. I want LUIS (and subsequently the bot) to identify such entities in messages provided to it.

The good news is that LUIS has a prebuilt entity called datetimeV2 so let’s add that to our Service Desk App. You may also want to add additional entities, for example: a list of applications managed by your service desk (and their synonyms), or perhaps resolver groups.

Next, we’ll need an Intent so that LUIS can have the bot execute the correct function (i.e. provide a response appropriate to the message). Let’s create an Intent called List.Tickets.

  • An Intent, or intention represents something the user wants to do (in this case, retrieve tickets from the service desk). A bot may be designed to handle more than one Intent. Each Intent is mapped to a function/method the bot executes.

I’ll need to provide some example utterances that LUIS can associate with the List.Tickets intent. These utterances must contain key words or phrases that LUIS can recognise as entities. I’ll use two examples:

  • “Show me tickets lodged for Skype in the last 10 weeks”
  • “List tickets raised for SharePoint after July this year”

Now, assuming I’ve created a list-based entity called Application (so LUIS knows that Skype and SharePoint are Applications), LUIS will recognise these terms as entities in the utterances I’ve provided:

Now I can train LUIS and test some additional utterances. As a general rule, the more utterances you provide, the smarter LUIS gets when resolving a message provided by a user to an intent. Here’s an example:

Here, I’ve provided a message that is a variation of utterances provided to LUIS, but it is enough for LUIS to resolve it to the List.Tickets intent. 0.84 is a measure of certainty – not a percentage – and it’s weighted against all other intents. You can see from the example that LUIS has correctly identified the Application (“skype”) and the measure of time (“last week”).

Finally, I publish the Service Desk App. It’s now ready to receive messages relayed from the bot.

Step 2: The Bot

Now, it’s possible to create a bot from the Azure Portal, which will automate many of the steps for you. During this process, you can use the Language Understanding template to create a bot with a built-in LUISRecognizer, so the code will be generated for you.

  • A Recognizer is a component (class) of the bot that is responsible for determining intent. The LUISRecognizer does this by relaying the message to the LUIS cognitive service.

Let’s take a look at the bot’s handler for the List.Tickets intent. I’m using Node.js here.
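
Here’s a minimal sketch of such a handler. It assumes the bot object from the endpoint sketch above, a published LUIS model URL, and the getData function described next; the dialog name and the response are illustrative:

const builder = require('botbuilder');

// Relay incoming messages to the published LUIS app
const recognizer = new builder.LuisRecognizer(process.env.LUIS_MODEL_URL);
bot.recognizer(recognizer);

bot.dialog('ListTickets', (session, args) => {
    // Extract the entities LUIS identified in the payload (results)
    const application = builder.EntityRecognizer.findEntity(
        args.intent.entities, 'Application');
    const datetime = builder.EntityRecognizer.findEntity(
        args.intent.entities, 'builtin.datetimeV2.daterange');

    // Pass the extracted values to getData, which queries the ticket source
    const tickets = getData(application, datetime, session.message.text);
    session.endDialog('I found ' + tickets.length + ' matching tickets.');
}).triggerAction({ matches: 'List.Tickets' });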

The function that handles the List.Tickets intent uses the EntityRecognizer class and findEntity method to extract entities identified by LUIS and returned in the payload (results).

It passes these values to a function called getData. In this example, I’m going to have my bot call a (fictional) remote service at http://xxxxx.azurewebsites.net/Tickets. This service will support the Open Data (OData) Protocol, allowing me to query data using the query string. Here’s the code:
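
This is a sketch of getData under those assumptions; the Application and CreatedDate field names, and the shape of the response, are mine:

const request = require('sync-request');

function getData(application, datetime, originalText) {
    // Build the OData $filter (buildFilter is defined in Step 3)
    const filter = "Application eq '" + application.entity + "' and " +
        buildFilter(datetime, originalText);

    // Call the (fictional) OData service
    const response = request('GET', 'http://xxxxx.azurewebsites.net/Tickets', {
        qs: { '$filter': filter }
    });

    // OData convention: the records come back in a 'value' array
    return JSON.parse(response.getBody('utf8')).value;
}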

(Note: I am using the sync-request package to call the REST service synchronously.)

Step 3: Chrono

So let’s assume we’ve sent the following message to the bot:

  • “List tickets raised for SharePoint after July this year”

It’s possible to query an OData data source for date based information using syntax as follows:

  • $filter=CreatedDate gt datetime'2018-03-08T12:00:00' and CreatedDate lt datetime'2018-07-08T12:00:00'

So we need to be able to convert ‘after July this year’ to something we can use in an OData query string.

Enter chrono-node and dateformat – neat packages that can extract date information from natural language statements and convert the resulting date into ISO UTC format respectively. Let’s put them both to use in this example:
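
Here’s a minimal sketch of the two packages working together (this assumes the 2018-era chrono-node v1 API):

const chrono = require('chrono-node');
const dateformat = require('dateformat');

// chrono-node extracts date information from the natural language phrase
const results = chrono.parse('after July this year');
const startDate = results[0].start.date();   // a JavaScript Date

// dateformat converts the Date into ISO UTC format for the OData query
console.log(dateformat(startDate, 'isoUtcDateTime'));
// e.g. 2018-06-30T14:00:00Z (the exact value depends on your timezone)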

It’s important to note that chrono-node will ignore some information provided by LUIS (in this case the word ‘after’, but also ‘last’ and ‘before’), so we need a function to process additional information to create the appropriate filter for the OData query:
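
A sketch of such a function follows; the qualifier detection is deliberately simple, and the CreatedDate field name is an assumption:

const chrono = require('chrono-node');
const dateformat = require('dateformat');

function buildFilter(datetime, originalText) {
    const results = chrono.parse(originalText);
    const start = dateformat(results[0].start.date(), 'isoUtcDateTime');

    // Check the original text for the qualifiers chrono-node ignores
    if (/\bafter\b/i.test(originalText)) {
        return "CreatedDate gt datetime'" + start + "'";
    }
    if (/\bbefore\b/i.test(originalText)) {
        return "CreatedDate lt datetime'" + start + "'";
    }

    // Otherwise treat it as a range (e.g. 'in the last 10 weeks'),
    // bounded by the end date chrono found, or by now
    const end = results[0].end
        ? dateformat(results[0].end.date(), 'isoUtcDateTime')
        : dateformat(new Date(), 'isoUtcDateTime');
    return "CreatedDate gt datetime'" + start +
        "' and CreatedDate lt datetime'" + end + "'";
}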


Handling time sensitive information is crucial when building modern applications designed to handle natural language queries. After all, wouldn’t it be great to ask for information using your voice, Cortana, and your mobile device when on the move? For now, these modern apps will be dependent on data in older systems with APIs that require dates or date ranges in a particular format.

The beauty of languages like Node.js and the npm package manager is that building these applications becomes an exercise in assembling building blocks as opposed to writing functionality from scratch.

Getting Started with Adaptive Cards and the Bot Framework

This article will provide an introduction to working with AdaptiveCards and the Bot Framework. AdaptiveCards provide bot developers with an option to create their own card templates to suit a variety of different scenarios. I’ll also show you a couple of tricks with Node.js that will help you design smart.

Before I run through the example, I want to point you to some great resources from adaptivecards.io which will help you build and test your own AdaptiveCards:

  • The schema explorer provides a breakdown of the constructs you can use to build your AdaptiveCards. Note that there are limitations to the schema, so don’t expect to do all the things you can do with regular mark-up.
  • The schema visualizer is a great tool to enable you (and your stakeholders) to give the cards a test drive.

There are many great examples online (start with GitHub), so you can go wild with your own designs.

In this example, we’re going to use an AdaptiveCard to display an ‘About’ card for our bot. Schemas for AdaptiveCards are JSON payloads. Here’s the schema for the card.
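
What follows is a representative sketch rather than the original; the layout and %placeholder% names are illustrative:

{
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "type": "AdaptiveCard",
  "version": "1.0",
  "body": [
    { "type": "TextBlock", "text": "%botName%", "size": "large", "weight": "bolder" },
    { "type": "TextBlock", "text": "Version %version%", "isSubtle": true },
    { "type": "TextBlock", "text": "%description%", "wrap": true }
  ],
  "actions": [
    { "type": "Action.Submit", "title": "Submit Feedback", "data": { "action": "feedback" } }
  ]
}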

This generates the following card (go play in the visualizer):

We’ve got lots of %placeholders% for information the bot will insert at runtime. This information could be sourced, for example, from a configuration file collocated with the bot, or from a service the bot has to invoke.

Next, we need to define the components that will play a role in populating our About card. My examples here will use node.js. The following simple view outlines what we need to create in our Visual Studio Code workspace:

The about.json file contains the schema for the AdaptiveCard (the JSON shown above). I like to create a folder called ‘cards’ in my workspace and store the schemas for each AdaptiveCard there.

The Source Data

I’m going to use dotenv to store the values we need to plug into our AdaptiveCard at runtime. It’s basically a config file (.env) that sits with your bot. Here we declare the values we want inserted into the AdaptiveCard at runtime:
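
Something like this; the key names are illustrative and need to line up with the %placeholders% in about.json:

# .env
botName=Service Desk Bot
version=1.0.0
description=I can help you lodge and track service desk tickets.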

This is fine for the example here but in reality you’ll probably be hitting remote services for records and parsing returned JSON payloads, rendering carousels of cards.

The Class

about.js is the object representation of the card. It provides attributes for each item of source data and a method to generate a card schema for our bot. Here we go:
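
This is a sketch of what the class can look like (ES6 syntax; the property names mirror the .env keys above):

const fs = require('fs');

class About {
    constructor(botName, version, description) {
        // Offload incoming arguments to class properties
        this.botName = botName;
        this.version = version;
        this.description = description;
    }

    toCard() {
        // Read the AdaptiveCard schema and find/replace each %property%
        // placeholder with the matching class property value
        let schema = fs.readFileSync('./cards/about.json', 'utf8');
        Object.keys(this).forEach((key) => {
            schema = schema.replace(new RegExp('%' + key + '%', 'g'), this[key]);
        });

        // The contentType attribute tells a handling function that the
        // schema represents an AdaptiveCard
        return {
            contentType: 'application/vnd.microsoft.card.adaptive',
            content: JSON.parse(schema)
        };
    }
}

module.exports = About;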

The constructor simply offloads incoming arguments to class properties. The toCard() method reads the about.json schema and recursively does a find/replace job on the class properties. A card is created and the updated schema is assigned to the card’s content property. The contentType attribute in the JSON payload tells a handling function that the schema represents an AdaptiveCard.

The Bot

In our bot we have a series of event handlers that trigger based on input from the user via the communication app, or from a cognitive service, which distils input from the user into an intent.

For this example, let’s assume that we have an intent called Show.Help. Utterances from the user such as ‘tell me about yourself’ or quite simply ‘help’ might resolve to this intent.

So we need to add a handler (function) in app.js that responds to the Show.Help intent (this is called a triggerAction). The handler deals with the dialog (interaction) between the user and the bot so we need it to both generate the About card and handle any interactions the card supports (such as clicking the Submit Feedback button on the card).

Note that the dialog between user and bot ends when the endDialog function is called, or when the conditions of the cancelAction are met.

Here’s the code for the handler:
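
A sketch of that handler; it assumes the bot and builder objects are set up in app.js and that require('dotenv').config() has been called:

const About = require('./about');

bot.dialog('Help', (session) => {
    if (session.message && session.message.value) {
        // A dialog is in session: the user interacted with the card
        // (e.g. clicked Submit Feedback), so handle the posted data here
        session.endDialog('Thanks for the feedback!');
    } else {
        // Otherwise it's a new dialog: build the About card and send it
        const about = new About(
            process.env.botName,
            process.env.version,
            process.env.description
        );
        const msg = new builder.Message(session).addAttachment(about.toCard());
        session.send(msg);
    }
}).triggerAction({ matches: 'Show.Help' });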

The function starts with a check to see if a dialog is in session (i.e. a message was received). If not (the else condition), it’s a new dialog.

We instantiate an instance of the About class and use the toCard() method to generate a card to add to the message the bot sends back to the channel. So you end up with this:


And there you have it. There are many AdaptiveCard examples online but I couldn’t find any for Node.js that covered the manipulation of cards at runtime. Now, go forth and build fantastic user experiences for your customers!

Your Modern Collaboration Landscape

There are many ways people collaborate within your organisation. You may or may not enjoy the fruits of that collaboration. Does your current collaboration landscape cater for the wide variety of groups that form (organically or inorganically) to build relationships and develop your business?

Moving to the cloud is a catalyst for re-evaluating your collaboration solutions and their value. Platforms like Office 365 are underpinned by search/discovery tools that can traverse and help draw insight from the output of collaboration, including conversations and connections between people and information. Modern applications open up new opportunities to build working groups that include people from outside your organisation, with whom you can freely and securely share content.

I’ve been in many discussions with customers on how enabling technologies play a role in the modern collaborative landscape. Part of this discussion is about identifying the various group archetypes and how characteristics can align or differ. I’ve developed a view that forms these groups into three ‘tiers’, as follows:

Organisations should consider a solution for each tier, because there are requirements in each tier that are distinct. The challenge for an organisation (as part of a wider Digital Workplace strategy) is to:

  • Understand how existing and prospective solutions will meet collaboration requirements in each tier, and communicate that understanding.
  • Develop a platform where information managed in each tier can be shared with other tiers.

Let’s go into the three tiers in more detail.

Tier One (Intranet)

Most organisations I work with have an established Tier One business solution, like a corporate intranet. These are the first to mature. They are logically represented as a hierarchy of containers (i.e. sites), with a mix of implicit and explicit access control (and associated auditing difficulties). The principal use is to store documents and host authored web content (such as news). Tier One systems are usually dependent on solutions in other tiers to facilitate (and retain) group conversations or discussions.

  • Working groups are hierarchical and long term, based on a need to model the relationships between groups in an organisation (e.g. Payroll sits under Finance, Auditing sits under Payroll).
  • Activity here is closed and formal. Contribution is restricted to smaller groups.
  • Information is one-way and top down. Content is authored and published for group-wide or organisation-wide consumption.
  • To get things done, users will be more dependent on a Service Desk (for example: managing access control, provisioning new containers), at the cost of agility.
  • Groups are established here to work towards a business outcome or goal (deliver a project, achieve our organisation’s objectives for 2019).

Tier Three (Social Network)

Tier Three business solutions represent your organisation’s social network. Maturity here ranges from “We launched [insert platform here] and no-one is using it” to “We’ve seen an explosion in adoption and it’s Giphy city”. They are usually dependent on solutions in other tiers to provide capabilities such as web content/document management (case in point: O365 Groups and Yammer).

  • Tier Three groups here are flattened, and cannot by design model a hierarchy. They tend to be long term, and prone to stagnation.
  • Groups represent communities, capabilities and similar interest groups, all of which are of value to your organisation. At this point you say: “I understand how the ‘Welcome to Yammer’ group is valuable, but what about the ‘Love Island Therapy’ group?”. At this point I say: “Here you have a collection of individuals who are proactively using and adopting your platform”.
  • Unlike in the other tiers, groups here tend to have no business outcome, although they’ll have objectives to gain momentum and visibility.
  • Collaboration here is open (public) and informal, down to the #topics people discuss and the language that is used.
  • A good Tier Three solution will be fully self service, subject to a pre-defined usage policy. There should be no restrictions beyond group level moderation in terms of who can contribute. If it’s public or green it’s fair game!
  • Tier Three groups have the biggest membership, and can support thousands of members.

Tier Two (Workspaces)

Tier Two comes last, because in my experience it’s the capability that is the least developed in organisations I work with and the last to mature.

A Tier Two business solution delivers a collaborative area for teams such as working groups, committees and project teams. They will provide a combination of features inherent in Tier One and Tier Three solutions: for example, the chat/discussion capabilities of a Tier Three solution and the content management capabilities of a Tier One solution.

  • Tier Two groups here are flattened, and cannot by design model a hierarchy. They tend to be short term, in place to support a timeboxed initiative or activity.
  • Groups represent working groups, committees and project teams, with a need to create content and converse. These groups are coalitions, including representation from different organisational groups that need to come together to deliver an outcome.
  • Groups work towards a business outcome, for example: develop a business case, deliver a document.
  • Collaboration here tends to be closed (restricted to a small group) and semi-formal, but it is possible for such groups to be closed and formal, or open and informal.
  • A good Tier Two solution will be fully self service, subject to a pre-defined usage policy. There should be no restrictions beyond group level moderation in terms of who can contribute.
  • Groups represent a small number of individuals, and do not grow to the size of departmental (Tier One) groups or social (Tier Three) groups.

The three-tiers view identifies the different ways collaboration happens within your organisation. It is solution agnostic: you can advocate any technology in any tier if it meets the requirement. The view helps evaluate the diverse needs of your organisation, and determine how effective your current solutions are at meeting requirements for collaboration and information working.

Agile Teams to High Performing Machines

Agile teams are often under scrutiny as they find their feet and as their sponsors and stakeholders realign expectations. Teams can struggle for many reasons; I won’t list them here, as you’ll find many root causes online and may have a few of your own.

Accordingly, this article is for Scrum Masters, Delivery Managers or Project Managers who may work to help turn struggling teams into high performing machines. The key to success here is measures, measures and measures. 

I have a technique I use to performance manage agile teams involving specific Key Performance Indicators (KPIs). To date it’s worked rather well. My overall approach is as follows:

  • Present the KPIs to the team and rationalise them. Ensure you have the team buy-in.
  • Have the team initially set targets against each KPI. It’s OK to be conservative. Goals should be achievable in the current state and subsequently improved upon.
  • Each sprint, issue a mid-sprint report detailing how the team is tracking against KPIs. Use On Target and Warning indicators to show where the team has to up its game.
  • Provide a KPI de-brief as part of the retrospective. Provide insight into why any KPIs were not satisfied.
  • Work with the team on setting the KPIs for the next sprint at the retrospective.

I use a total of five KPIs, as follows:

  • Total team hours worked (logged) in scrum
  • Total [business] value delivered vs projected
  • Estimation variance (accuracy in estimation)
  • Scope vs Baseline (effectiveness in managing workload/scope)
  • Micro-Velocity (business value the team can generate in one hour)

I’ve provided an Agile Team Performance Tracker for you to use that tracks some of the data required to use these measures. Here’s an example dashboard you can build using the tracker:

In this article, I’d like to cover some of these measures in detail, including how tracking them can start to effect positive change in team performance. These measures have served me well and help to provide clarity to those involved.

Estimation Variance

Estimation variance is a measure I use to track estimation efficiency over time. It relies on the team providing hours-based estimates for work items, but is attributable to your points-based estimates. As a team matures and gets used to estimation, I expect the time invested to more accurately reflect what was estimated.

I define this KPI as a +/-X% value.

So for example, if the current Estimation Variance is +/-20%, it means the target for team hourly estimates, on average for work items in this sprint, should be tracking no more than 20% above or below logged hours for those work items. I calculate the estimation variance as follows:

[estimation variance] = ( ([estimated time] - [actual time]) / [estimated time] ) x 100

If the value is less than the negative threshold, it means the team is under-estimating. If the value is more than the positive threshold, it means the team is over-estimating. Either way, if you’re outside the threshold, it’s bad news.
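
To illustrate with hypothetical numbers: a work item estimated at 10 hours that actually takes 12 gives ( (10 - 12) / 10 ) x 100 = -20%. Against a +/-20% KPI that sits right on the negative threshold – the team is under-estimating.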

“But why is over-estimating an issue?” you may ask. “An estimate is just an estimate. The team can simply move more items from the backlog into the sprint.” Remember that estimates are used as baselines for future estimates and planning activities. A lack of discipline in this area may impede release dates for your epics.

You can use this measure against each of the points tiers your team uses. For example:

In this example, the team is under-estimating bigger ticket items (5’s and 8’s), so targeted efforts can be made next estimation session to bring this within the target threshold. Overall, though, the team is tracking pretty well – the overall variance of -4.30% could well be within the target KPI for this sprint.

Scope vs Baseline

Scope vs Baseline is a measure used to assess the team’s effectiveness at managing scope. Let’s consider the following 9-day sprint burndown:

The baseline represents the blue line. This is the projected burn-down based on the scope locked in at the start of the sprint. The scope is the orange line, representing the total scope yet to be delivered on each day of the sprint.

Obviously, a strong team tracks against or below the baseline, and will take on additional scope to stay aligned to the baseline without falling too far below it. Teams that overcommit/underdeliver will ‘flatline’ (not burn down) and track above the baseline, and even worse may increase scope when tracking above the baseline.

The Scope vs Baseline measure is tracked daily, with the KPI calculated as an average across all days in the sprint.

I define this KPI as a +/-X% value.

So for example, if the current Scope vs Baseline is +/-10%, it means the actual should not track on average more than 10% above or below the baseline. I calculate the daily variance as follows:

[scope vs baseline] = ( ([actual] / [projected]) x 100 ) - 100
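
To illustrate with hypothetical numbers: if 230 hours of scope remain on a day where the baseline projects 250, the daily variance is ( (230 / 250) x 100 ) - 100 = -8%; the team is tracking 8% below the baseline that day.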

Here’s an example based on the burndown chart above:

The variance column stores the value for the daily calculation. The result is the Scope vs Baseline KPI (+4.89%). We see the value ramp up into the positive towards the end of the sprint, representing our team’s challenge closing out its last work items. We also see the team tracking at -60% below the baseline on day 5, which subsequently triggers a scope increase to track against the baseline – a behaviour indicative of a good performing team.

Micro-Velocity

Velocity is the most well-known measure. If it goes up, the team is well oiled and delivering business value. If it goes down, it’s the by-product of team attrition, communication breakdowns or other distractions.

Velocity is a relative measure, so whether it’s good or bad depends on the team and the measures taken in past sprints.

What I do is create a variation on the velocity measure that is defined as follows:

[micro velocity] = SUM([points done]) / SUM([hours worked])
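
To illustrate with hypothetical numbers: a team that delivers 30 points while logging 240 hours across the sprint has a micro-velocity of 30 / 240 = 0.125 points per hour.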

I use a daily calculation of micro-velocity (vs past iterations) to determine the impact that team attrition and on-boarding new members will have within a single sprint.


In conclusion, using some measures as KPIs on top of (but dependent on) the reports provided by the likes of Jira and Visual Studio Online can really help a team to make informed decisions on how to continuously improve. Hopefully some of these measures may be useful to you.

5 Tips: Designing Better Bots

Around about now many of you will be in discussions internally or with your partners on chatbots and their applications.

The design process for any bot distils a business process and associated outcome into a dialog. This is a series of interactions between the user and the bot where information is exchanged. The bot must deliver that outcome expediently, seeking clarifications where necessary.

I’ve been involved in many workshops with customers to elicit and evaluate business processes that could be improved through the use of bots. I like to advocate a low risk, cost effective and expedient proof of concept, prior to a commitment to full scale development of a solution. Show, rather than tell, if you will.

With that in mind, I present to you my list of five house rules or principles to consider when deciding if a bot can help improve a business process:

1. A bot can’t access information that is not available to your organisation

Many bots start out life as a proof of concept, or an experiment. Time and resources will be limited at first. You want to prove the concept expediently and with agility. You’ll want to avoid blowing the scope in order to stand up new data sources or staging areas for data.

As you elaborate on the requirements, ask yourself where the data is coming from and how it is currently aggregated or modified in order to satisfy the use case. Your initial prototype may well be constrained by the data sources currently in place within your organisation (and accessibility to those data sources).

Ask the questions: “Where is this information at rest?”, “How do you access it?”, “Is it manually modified?”.

2. Don’t ask for information the user doesn’t know or has to go and look up

Think carefully – does the bot really need to seek clarification? Consider, for example, a bot that opens by asking the user for a service ticket reference number.

In practice, you’re forcing the user to sign in to some system, or dig around their inbox, to copy/paste a unique identifier. I’ve yet to meet anyone who has the capacity to memorise things like their service ticket reference numbers. You can design smarter. For example:

  1. Assume the user is looking for the last record they created (you can use your existing data to determine if this is likely)
  2. Show them their records. Get them to pick one.
  3. Use the dialog flow to retain the context of a specific record

By all means, have the bot accommodate scenarios where the user does provide a reference number. Remember, your goal is to reduce time to the business outcome and eliminate menial activity. (Trust me. Looking up stuff in one system to use in another system is menial activity.)
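
A hypothetical exchange that follows this pattern:

User: “What’s the status of my ticket?”
Bot: “Here are the last three tickets you raised: 1. Laptop battery replacement, 2. VPN access request, 3. Password reset. Which one?”
User: “The VPN one.”
Bot: “Your VPN access request is assigned and in progress. Want me to escalate it?”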

3. Let’s be clear – generic internet keyword searches are an exception

When Siri says ‘here’s what I found on the Internet’, it’s catching an exception; a fall-back option because it’s not been able to field your query. It’s far better than ‘sorry, I can’t help you’. A generic internet/intranet keyword search should never be your primary use case. Search and discovery activity is key to a bot’s value proposition, but these functions should be underpinned by a service fabric that targets (and aggregates) specific organisational data sources. You need to search the internet? Please, go use Bing or Google.

4. Small result sets please

As soon as a chat-bot has to render more than 5 records in one response, I consider this an auto-fail.

Challenge any assertion that a user would want to see a list of more than 5 results, and de-couple the need to summarise data from the need to access individual records. Your bot needs to respond quickly, so avoid expensive queries and large resulting data sets that need to be cached somewhere.

For example, a bot might respond with a summary that gives the user enough information to take further action (do you want to escalate this?). The summary also informs the most appropriate criteria for the next user-driven search/discovery action (reports that are waiting on me).

Result sets may return tens, hundreds or thousands of records, but the user inevitably narrows this down to one, so the question is: “How do you get from X results down to 1?”

Work with the design principle that the bot should apply a set of criteria that returns the ‘X most likely’. Use default criteria based on the most common filtering scenario, but allow the user to re-define that criteria.

5. Don’t remove the people element

Remember a bot should eliminate the menial work a person does, not substitute the person. If you’re thinking of designing a bot to substitute or impersonate a person, think again.

No one wants to do menial work, and I’d wager there is no one in your organisation whose workload is 100% menial. Those that do menial work would much rather re-focus their efforts on more productive and rewarding endeavours.

Adding a few Easter Eggs to your bot (i.e. human-like responses) is a nice to have. Discovery and resulting word of mouth can assist with adoption.

Consider whether your process involves the need to connect with a person (either as a primary use case, or an edge case). If this is the case, ensure you can expedite the process. Don’t simply serve up a contact number. Instead, consider how to connect the user directly (via tools like Skype, if they are ‘online’). Alternatively, allow the user to request a call-back.


Remember, there are requirements bots and agents cannot satisfy. Define some core principles or house rules with your prospective product owners and test their ideas against them. It can lead you to high value business solutions.