Global Power Platform Bootcamp 2020, Pune

This technical conference was sponsored by Rapid Circle and Muhimbi and backed by Microsoft as part of the Global Power Platform Bootcamp.


Session Agenda –

  • Deep Dive into the JSON function in Power Apps
  • Export Multiple Media Controls in Power Apps to PDF

Speakers –

Demos –

Image to PDF –


Speech to Text –


Session Snips –

Explaining the architectural diagram for converting audio from the Microphone control of PowerApps to text

Awards and Recognition –


Design a Canvas App with The Camera Control to capture Images for Identification/Recognition

This post is part 1 of a 2 part series on Identifying/Recognizing Images captured in Camera control of PowerApps.

In this article, we will capture an Image using the Camera control of PowerApps and then pass it to our Azure Cognitive service for Image Identification/Recognition hosted in the Azure environment.

This is an advanced topic related to a business scenario since it effectively allows a Power User to consume the Custom Vision API in Azure Cognitive services for Identifying/Recognizing Images.

To reduce the complexity, we will divide this article into two parts:

    1. Design a Canvas App with The Camera Control to capture Images for Identification/Recognition.
    2. Power Automate (Flow) solution to Identify/Recognize an Image.

 

Prerequisites-

Before you begin, please make sure the following prerequisites are in place:

  • An Office 365 subscription with access to PowerApps and Power Automate (Flow).
  • An Azure Subscription to host an Image recognition cognitive service.
  • Power Automate Plan for consuming the HTTP Custom actions.
  • Appropriate privileges to create Power Apps and Power Automate (Flow).
  • Working knowledge of both Power Apps and Power Automate (Flow).

 

Designing a Canvas App with The Camera Control to Capture Image

Step 1 – Creating the basic structure of the Canvas App

  • Go to powerapps.com, sign in with your work or school account, click on the Apps menu in the left navigation bar, then click on ‘+ Create’ and select ‘Canvas app from blank’.
  • Specify the Name and the Format of the app and click on ‘Create’.
  • Add a Camera control as below.


  • Now add a Button to the Canvas as shown below.


  • Next, rename the Button to ‘Submit’ as shown.


 

  • Now that we have the outer body ready, let’s go ahead and configure our components with formulas.

 

Step 2 – Configuring the components with formulas

  • We will first configure a collection called “collectphoto” to hold the captured Image.
  • Then, we’ll create a variable ‘JSONSample’ and set it to the JSON of the Image.
  • Select the ‘OnSelect’ property of the Camera and add the following formula to it:

ClearCollect(collectphoto, Camera2.Photo); Set(JSONSample, JSON(collectphoto, JSONFormat.IncludeBinaryData));


 

  • Now it’s time to add a Power Automate (Flow) to our Power Apps.
  • Inside the Action menu, there is an option to add ‘Power Automate’ to your existing Power Apps. To do this, click on the ‘Power Automate’ option as highlighted.


  • Then, click on ‘Create a new Flow’ as shown below.


  • Rename the Power Automate (Flow) to “AIImageReCogService” and add ‘PowerApps’ as the trigger.
  • Once that has been done, add a ‘Compose’ action and select ‘CreateFile_FileContent’ from the Dynamic content in the pane on the right side of the image below.
  • Make sure you click on Save.


Note – We completed these steps in the Power Automate (Flow) just so that we can get the Power Automate (Flow) added to our Power Apps. Later in this article, we will add more actions to the Power Automate (Flow) so as to carry out the Image Recognition.

  • Finally select the ‘OnSelect’ property of the ‘Submit’ button and add the following formula.

AIImageReCogService.Run(First(collectphoto).Url)

  • The above formula tells the ‘Submit’ button to trigger the Power Automate (Flow) named ‘AIImageReCogService’, created earlier, using the .Run() method, in which we pass the Url of the first image sitting in the Image Collection, ‘collectphoto’.
  • Now that we have our Power App ready, let’s head towards configuring the rest of the Power Automate solution.

Global AI On Tour 2020

Yash Kamdar

Session Agenda –

  • Introduction to Microsoft Azure Cognitive Services
  • Text Analytics API
  • LUIS
  • Vision API
  • Content Moderation API

 

Speakers –

 

Demos –

 

LUIS-


 

Vision –


 

Text Analytics –


 

Content Moderator –


 

Other related blog posts –

Create a LUIS application for Smart Email Management

I have often wondered, what if there was a way for us to talk to our systems in our own Natural Language? What if our applications were able to Interpret and Understand our Language and then carry out Predefined tasks in an Intelligent manner by following a strict Prediction Model?

Well, no more wondering, as Microsoft has introduced the “Language Understanding Intelligent Service” – LUIS.

LUIS is a cloud-based service to which you can apply customized machine learning capabilities by passing in Utterances (natural language) to predict the overall Intent, and thereby pull out relevant detailed information.
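
To make the idea concrete, here is a minimal sketch (in Python, outside of Power Automate) of a call to the LUIS v2 prediction endpoint; the region, app ID and subscription key below are placeholder values you would replace with your own.

import requests

# Placeholder values; replace with your own LUIS region, app ID and subscription key.
LUIS_ENDPOINT = "https://westus.api.cognitive.microsoft.com"
APP_ID = "<your-luis-app-id>"
SUBSCRIPTION_KEY = "<your-luis-subscription-key>"

def predict_intent(utterance: str) -> dict:
    """Send an utterance to the LUIS v2 prediction endpoint and return the JSON result."""
    url = f"{LUIS_ENDPOINT}/luis/v2.0/apps/{APP_ID}"
    params = {"subscription-key": SUBSCRIPTION_KEY, "q": utterance, "verbose": "true"}
    response = requests.get(url, params=params)
    response.raise_for_status()
    return response.json()

result = predict_intent("One of the actions in my Power Automate solution keeps failing")
# The response includes 'topScoringIntent' with the predicted intent and its confidence score.
print(result["topScoringIntent"]["intent"], result["topScoringIntent"]["score"])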

 

In this article, we are taking a real-world scenario where a Support team is trying to implement LUIS on a common Shared Mailbox, so that the Intelligent Service can read the Message Body of each email and, based on the Prediction Model, determine the correct Microsoft Teams channel to which the email needs to be assigned.


 

To reduce the complexity, we will divide this article into two parts:

  1. Design and Train our LUIS Application
  2. Create a Power Automate solution for Implementing Smart Email Management based on LUIS Predictions.

 

Prerequisites-

Before you begin, please make sure the following prerequisites are in place:

 

 

Step 1 – Building the LUIS Application

  • Sign in to the LUIS portal.
  • Once successfully signed in, on the page that appears select ‘+ New app for conversation’ as shown in the image below.


 

  • A pop-up form appears where you need to fill in the basic details for the LUIS application: the ‘Name’, the ‘Culture’ (basically the language which the LUIS application should expect) and the ‘Description’.
  • Once this information is filled in, click on ‘Done’.


 

Step 2 – Utterances and Intents

We now proceed by creating Intents for our LUIS application. But wait, what exactly is an Intent, you ask?

  • An Intent represents an action that the user wants to perform.
  • If you remember the image we saw a couple of minutes ago, the intent here is to classify emails. That’s it, let’s keep it simple. So when I say classify, I need to know the categories for the classification, right? These categories will be our Intents.

 

  • We need to assign emails to one of the three categories which are our Intents, namely –
    • OnPremise team
    • Online team
    • Sales team

 

Step 2.1 – Creating Intents

  • Picking up where we left off, once your LUIS application has been created you will be navigated to a page where you will see an option called ‘Intents’ in the left navigation.
  • Select the ‘Intents‘ option and click on ‘+ Create‘ as shown in the image below-


  • In the pop-up box that opens, enter the correct ‘Intent name’ and click on ‘Done’.
  • Do this for ‘Ticket_OnPremise‘, ‘Ticket_Online‘ and ‘Sales‘.


 

Step 2.2 – Creating Utterances

  • Utterances are inputs from the user or a system that your LUIS application will receive.
  • The LUIS application needs to understand and interpret these utterances to extract intents and entities from them, and so it is extremely important to capture a variety of different example Utterances for each intent.
  • Basically, you need to type in the Utterances, i.e. the expected phrases that your users will normally write in the email messages received by your shared mailbox.
  • Navigate to the ‘Intents’ that we have created in Step 2.1 and start writing Utterances as shown in the image below.


 

  • If you have closely observed the image above, you will see that I have written the Utterance:

How do I decide the no. of Application, Web Front end and Search servers needed to be configured in my SharePoint 2019 environment

  • Once you write an Utterance and press Enter, LUIS starts breaking down the Utterance and keeps track of the keywords inside it.
  • The more often a particular word appears in the sample Utterances, the more confident LUIS becomes in predicting the Intent, and thus the higher the Prediction score for that intent.

 

  • Please take a look at the sample Utterances across Intents that I have configured for our LUIS application.

 

Ticket_OnPremise Utterances –


 

Ticket_Online Utterances –


 

Sales Utterances – 


  • Now that you have seen my sample Utterances, let’s go ahead and Train our LUIS application.
  • But WAIT! Did you notice that in all the images above, the ‘TRAIN’ button at the top is showing in red?
  • That is the LUIS application’s way of telling you that you have untrained utterances registered against Intents in your LUIS application.

 

Step 3 – Train the LUIS application

  • Now that we have built up the basic structure of our LUIS application, let us go ahead and train it. The LUIS application has already been reminding us that it has untrained utterances across intents.
  • Just navigate to the top of the page and hit the ‘Train’ button.
  • The LUIS application will start training itself and show you a notification stating that it is collecting the training data, as shown in the image below.


  • Sit back and relax, it will take some time for the LUIS application to train itself.
  • Once the training is finished, the LUIS application will notify you that the training is completed.
  • Now it is time to test our LUIS application before we go ahead and Publish it.

 

Step 4 – Test the LUIS application

  • Click on the ‘Test’ button from the top navigation and it opens up a test environment for us as shown in the image below.
  • Here we can type sample utterances once again and see if the LUIS application (after training) is able to predict the Intents correctly.


  • Let’s for example type a sample utterance and hit Enter –

One of the actions in my Power Automate solution keeps failing

  • As you can see in the image below, LUIS quickly runs the test utterance and posts a result. It has correctly predicted that the intent is ‘Ticket_Online’, which is also the ‘Top-scoring Intent’ with a score of 0.483 – the highest score, though still a fairly low confidence at this point because this is just our first test.
  • You need to keep training the LUIS app with more and more utterances so that its confidence keeps increasing.


 

  • Let’s go ahead and test another utterance and see whether this time the confidence, i.e. the ‘Intent Score’, increases.


  • There you go! If you observe, this time the ‘Top-Scoring Intent’ has a score of 0.723, which means the LUIS application is now more confident about the intent than it was for the last utterance.
  • So basically, the more utterances you pass in, the more intelligent the LUIS application becomes.

 

Step 5 – Publish the LUIS application

  • That’s it, we are done here.
  • If you think about it, now that you know the basics it is so easy to go ahead and configure a LUIS application which at the start may seem like a daunting task.
  • Just navigate to the top of your screen and click on the ‘Publish’ button.
  • A pop-up form opens asking for the Slot in which the LUIS application needs to be published; just select ‘Production’ and click on ‘Done’.


 

Next, we will create a Power Automate solution to grab the ‘Prediction’, and in turn the ‘CorrectIntent’, exposed by the LUIS application, based on which we will automate decision making.

Power Automate (Flow) solution to Identify/Recognize an Image

This post is part 2 of a 2 part series on Image Identification/Recognition using Azure Cognitive Services.

In our previous post, we created a Canvas application to capture an Image using the Camera control of PowerApps and then pass it to our Azure Cognitive service for Image Identification/Recognition hosted in the Azure environment.

To reduce the complexity, we have divided this article into two parts:

  1. Design a Canvas App with The Camera Control to capture Images for Identification/Recognition
  2. Power Automate (Flow) solution to Identify/Recognize an Image

Prerequisites-

Before you begin, please make sure the following prerequisites are in place:

  • An Office 365 subscription with access to PowerApps and Power Automate (Flow).
  • An Azure Subscription to host an Image recognition cognitive service.
  • Power Automate Plan for consuming the HTTP Custom actions.
  • Appropriate privileges to create Power Apps and Power Automate (Flow).
  • Working knowledge of both Power Apps and Power Automate (Flow).

 

Now that the prerequisites are in place, go back to the ‘AIImageReCogService’ Power Automate (Flow) for image recognition that we started creating in Step 2 of the previous post.

 

Step 3 – Create File

  • For the ‘Site Address’ as shown in the image below, enter the correct site address from the drop down menu where you intend to create the image file against the image captured in Camera control of PowerApps.
  • For the ‘Folder Path‘, select the appropriate Library using the Folder menu on the right hand side.
  • For the ‘File Name‘, give a name as per your choice but make sure you do not forget to add the ‘.jpg‘ extension.
  • For the ‘File content’ as shown in the image below, navigate to the ‘Add dynamic content’ line and select the ‘Outputs’ option available inside the ‘Compose’ action.


 

Step 4 – HTTP Action

  • Insert an ‘HTTP‘ action and select the ‘Post‘ method from the drop down menu.
  • For the ‘URI’ as shown in the image below, enter the URI mentioned below, where we make a simple call to our API, i.e. the Image Recognition Cognitive Service hosted in the Azure environment (a minimal Python sketch of the equivalent REST call follows this list).

https://westcentralus.api.cognitive.microsoft.com/vision/v2.0/analyze?visualFeatures=Categories,Adult,Tags,Description,Faces,Color,ImageType,Objects,Brands&language=en

  • For the ‘Headers‘ as shown in the image below, you need to enter the ‘Ocp-Apim-Subscription-key‘ as well as the ‘Content-type‘.
  • You can obtain the ‘Ocp-Apim-Subscription-key‘ from the Azure environment which is basically the ‘Subscription key‘ that you obtain once you have configured the Cognitive service.
  • For the ‘Queries‘ as shown in the image below, enter the value as ‘Description,Tags‘ against the ‘visualfeatures‘.
  • For the ‘Body‘ field, Navigate to ‘Add dynamic content‘ line and choose value ‘Outputs‘ available under the ‘Compose‘ action.
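
For reference, here is a minimal sketch of the equivalent REST call made outside of Power Automate, using Python and the requests library; the subscription key and the image file name are placeholder values, and the endpoint region should match your own Cognitive Services resource.

import requests

# Placeholder values; replace with your own Computer Vision endpoint region and key.
ENDPOINT = "https://westcentralus.api.cognitive.microsoft.com"
SUBSCRIPTION_KEY = "<your-computer-vision-key>"

def analyze_image(image_path: str) -> dict:
    """POST raw image bytes to the Computer Vision v2.0 'analyze' API and return the JSON result."""
    url = f"{ENDPOINT}/vision/v2.0/analyze"
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/octet-stream",  # raw binary body, like the 'Outputs' of the Compose action
    }
    params = {"visualFeatures": "Description,Tags", "language": "en"}
    with open(image_path, "rb") as image_file:
        response = requests.post(url, headers=headers, params=params, data=image_file.read())
    response.raise_for_status()
    return response.json()

analysis = analyze_image("captured-photo.jpg")  # hypothetical file name
print(analysis["description"]["captions"])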


 

Step 5 – Parse JSON

  • Next, we need to parse the response obtained from the Cognitive Services API, i.e. the HTTP POST request, in order to extract the identified/recognized text.
  • For the ‘Content‘ as shown in the image below, Navigate to the Add dynamic content line and choose ‘Body‘ option available inside ‘HTTP‘ action.
  • For the ‘Schema‘ please use the payload as below-
{
    "type": "object",
    "properties": {
        "tags": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "name": {
                        "type": "string"
                    },
                    "confidence": {
                        "type": "number"
                    }
                },
                "required": [
                    "name",
                    "confidence"
                ]
            }
        },
        "description": {
            "type": "object",
            "properties": {
                "tags": {
                    "type": "array",
                    "items": {
                        "type": "string"
                    }
                },
                "captions": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "text": {
                                "type": "string"
                            },
                            "confidence": {
                                "type": "number"
                            }
                        },
                        "required": [
                            "text",
                            "confidence"
                        ]
                    }
                }
            }
        },
        "requestId": {
            "type": "string"
        },
        "metadata": {
            "type": "object",
            "properties": {
                "width": {
                    "type": "integer"
                },
                "height": {
                    "type": "integer"
                },
                "format": {
                    "type": "string"
                }
            }
        }
    }
}


 

Step 6 – Initialize Variable

  • Add an ‘Initialize Variable‘ action and for the ‘Name‘ as shown in the image below, give a meaningful name like ‘OutputText‘.
  • For the ‘Type‘, select ‘String‘ from the drop down menu.


 

Step 7 – Apply to each

  • The ‘Parse JSON‘ action will parse the output of the Image Recognition Cognitive Services API.
  • We will add an ‘Apply to each’ loop to iterate through all the available properties parsed by our Parse JSON action.
  • The property that holds the recognized text is the ‘captions’ array, which is returned by our Image Recognition Cognitive Service hosted in Azure.
  • In the ‘Select an output from previous step’ field as shown in the image below, navigate to the ‘Add dynamic content’ line and choose ‘Captions’ available under the ‘Parse JSON’ action.
  • Next, we will set the variable ‘OutputText’ initialized in the previous step with the ‘text’ property inside the captions object, which holds the final identified/recognized text (a short Python sketch of this extraction follows this list).
  • Add a ‘Set Variable‘ action and for the ‘Name‘ select the correct variable i.e ‘OutputText‘ from the drop down menu.
  • For the ‘Value‘ as shown in the image below, Navigate to Add dynamic content line and choose option ‘Text‘ available inside ‘ParseJSON‘ action.
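
In code terms, the ‘Apply to each’ and ‘Set variable’ steps boil down to pulling the ‘text’ value out of the captions array. Here is a minimal Python sketch, assuming a response shaped like the ‘Parse JSON’ schema above (the values are made up for illustration).

# Illustrative response shaped like the 'Parse JSON' schema above (values are made up).
analysis = {
    "description": {
        "tags": ["person", "indoor"],
        "captions": [{"text": "a person sitting at a desk", "confidence": 0.87}],
    }
}

# Equivalent of the 'Apply to each' + 'Set variable' steps: keep the caption text.
output_text = ""
for caption in analysis.get("description", {}).get("captions", []):
    output_text = caption["text"]

print(output_text)  # "a person sitting at a desk"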


 

Step 8 – Update File properties

  • Now that we have the identified/recognized text and safely preserved it in our OutputText variable, let us go ahead and update the Image file created in SharePoint with the Identified/Recognized text.
  • I have simply created a column named ‘RecogText’ in the SharePoint library where we create the captured image from the Power App.
  • Add an ‘Update file properties‘ action and for the ‘Site Address‘ as shown in the image below, select the correct site address.
  • For the ‘Library Name‘ select the correct library name from the drop down menu where we create our image.
  • For the ‘Id‘ field, Navigate to Add dynamic content line and select ‘ItemId‘ option available under the ‘Create file‘ action.
  • For the ‘RecogText‘ field available once you enter the correct site address and the library name, Navigate to Add dynamic content line and select OutputText available under the list of variables.


 

That’s all that needs to be done.

Let us now go ahead and check if our Power Automate solution is indeed working as per our expectations and returning the right text for the identified image.

 

Photo captured in Camera control of PowerApps-


 

Image Created in SharePoint Library and Recognized Text –


Create a Power Automate solution for Implementing Smart Email Management based on LUIS Predictions

This post is part 2 of a 2 part series on Creating a LUIS application for Smart Email Management.

In our previous article, we built and deployed a LUIS application, trained it by passing in example ‘Utterances’ for the created ‘Intents’, and performed tests to check whether LUIS is able to provide correct ‘Intent Scores’ for the test ‘Utterances’.

 

To reduce the complexity, we will divide this article into two parts:

  1. Design and train our LUIS application
  2. Power Automate solution for Implementing Smart Email Management based on LUIS Predictions.

 

In this article, we will create a Power Automate solution to grab the ‘Prediction’, and in turn the ‘CorrectIntent’, exposed by the LUIS application, based on which we will automate decision making.

Our Primary Goal behind this solution is to implement smart sorting of emails (by passing the email body to LUIS) that are received/dropped in a Shared Mailbox folder and thereby automate assignment of those emails to correct Microsoft Team channels based on ‘Intent‘ and ‘Prediction Score‘.

 

Prerequisites-

Before you begin, please make sure the following prerequisites are in place:

 

Let’s start configuring our Power Automate (Flow) solution then.

Step 1 – When a new email arrives

  • For the ‘Folder’ as shown in the image below, select the appropriate folder of the mailbox using the ‘Folder menu’ present on the right hand side.
  • The ‘Inbox‘ folder of the mailbox selected, as shown in the image below, is the same folder where all emails will be received/dropped before we go ahead and implement smart sorting using LUIS predictions.


 

Step 2 – Get Prediction

  • For the ‘App Id‘ as shown in the image below, click on the drop down menu and from the list of LUIS applications, select the correct LUIS application present in your subscription.
  • For the ‘Utterance text‘, Navigate to the ‘Add dynamic content‘ line and choose ‘Subject‘ or ‘Body Preview‘ present inside ‘When a new email arrives‘ action.


 

Step 3 – Initialize Variable

  • Next, Add an ‘Initialize Variable‘ action and for the ‘Name‘ as shown in the image below, enter ‘CorrectIntent‘.
  • For the ‘Type‘, select ‘String‘ from the drop down menu.


 

Step 4 – Parse JSON

  • For the ‘Content‘ as shown in the image below, Navigate to the ‘Add dynamic content‘ line and choose LUIS Prediction available under the ‘Get Prediction‘ action.
  • For the ‘Schema’, please enter the following schema (a short Python sketch of the fields it exposes follows the schema)-
{
    "type": "object",
    "properties": {
        "query": {
            "type": "string"
        },
        "topScoringIntent": {
            "type": "object",
            "properties": {
                "intent": {
                    "type": "string"
                },
                "score": {
                    "type": "number"
                }
            }
        },
        "intents": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "intent": {
                        "type": "string"
                    },
                    "score": {
                        "type": "number"
                    }
                },
                "required": [
                    "intent",
                    "score"
                ]
            }
        },
        "entities": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "entity": {
                        "type": "string"
                    },
                    "type": {
                        "type": "string"
                    },
                    "startIndex": {
                        "type": "integer"
                    },
                    "endIndex": {
                        "type": "integer"
                    },
                    "score": {
                        "type": "number"
                    }
                },
                "required": [
                    "entity",
                    "type",
                    "startIndex",
                    "endIndex",
                    "score"
                ]
            }
        }
    }
}
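
As a quick illustration of what this schema describes, here is a minimal Python sketch of a prediction response shaped like the schema above (the values are made up) and of the field that the later ‘Compose’ and ‘Set Variable’ steps rely on.

# Illustrative LUIS prediction response shaped like the schema above (values are made up).
prediction = {
    "query": "One of the actions in my Power Automate solution keeps failing",
    "topScoringIntent": {"intent": "Ticket_Online", "score": 0.72},
    "intents": [
        {"intent": "Ticket_Online", "score": 0.72},
        {"intent": "Ticket_OnPremise", "score": 0.18},
        {"intent": "Sales", "score": 0.05},
    ],
    "entities": [],
}

# The value we ultimately store in the 'CorrectIntent' variable is the top-scoring intent name.
correct_intent = prediction["topScoringIntent"]["intent"]
print(correct_intent)  # "Ticket_Online"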


 

 

Step 5 – Compose

  • Next, Add a ‘Compose‘ action and for the ‘Inputs‘ as shown in the image below, Navigate to ‘Add dynamic content‘ line and choose ‘Intent‘ option available under ‘Parse JSON‘ action.


 

Step 6 – Set Variable

  • Now we will set the ‘CorrectIntent‘ variable created above.
  • For the ‘Name‘ as shown in the image below, select the variable ‘CorrectIntent‘ from the drop down menu.
  • For the ‘Value‘ as shown in the image below, Navigate to the ‘Add dynamic content line‘ and choose ‘Outputs‘ option available under the ‘Compose‘ action.


 

Step 7 – Switch case

  • Next, add a ‘Switch‘ conditional action.
  • For the ‘On‘ field as shown in the image below, Navigate to the ‘Add dynamic content‘ line and select ‘CorrectIntent‘ available inside the ‘Variables‘ section.
  • Basically, what we are doing here is matching the CorrectIntent, i.e. the Intent that has scored the highest among all the intents based on the pre-trained prediction model (a minimal Python sketch of this decision logic follows this list).
  • When the ‘CorrectIntent’ matches the ‘Equals’ parameter, it will switch to that particular case.
  • If you closely observe the ‘Cases’, the name of each Case is the same as that of the Intents created in the LUIS application that we deployed earlier in part 1.
  • Since we want to assign email messages to a particular Microsoft Teams channel based on the LUIS Predictions and CorrectIntent, let’s go ahead and create some channels.
  • As shown in the image below, add ‘Post a message‘ action inside each of the Cases.
  • For the ‘Team‘, select the correct Team from the drop down menu. Similarly for the ‘Channel‘ select the appropriate team channel from the drop down menu.
  • In the ‘Message‘ field inside the ‘Post a message‘ action as shown in the image below, configure a message and navigate to the ‘Add dynamic content‘ line and choose ‘Utterance text‘ available inside the Get Prediction action so that the message present in the received email gets copied.
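
The ‘Switch’ logic can be summarised as a simple lookup from the predicted intent to a Teams channel. Here is a minimal Python sketch of that decision logic; the channel names are placeholders, and in the real flow the posting is done by the ‘Post a message’ action rather than code.

# Map each LUIS intent to a Microsoft Teams channel (channel names are placeholders).
INTENT_TO_CHANNEL = {
    "Ticket_OnPremise": "OnPremise Support",
    "Ticket_Online": "Online Support",
    "Sales": "Sales",
}

def route_email(correct_intent: str, utterance_text: str) -> None:
    """Equivalent of the Switch cases: post the email text to the channel mapped to the intent."""
    channel = INTENT_TO_CHANNEL.get(correct_intent)
    if channel is None:
        print(f"No matching case for intent '{correct_intent}'; leaving the email unassigned.")
        return
    # In the flow, this is where the 'Post a message' action runs; here we just print the routing decision.
    print(f"Post to channel '{channel}': {utterance_text}")

route_email("Ticket_Online", "One of the actions in my Power Automate solution keeps failing")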


 

Now that we have configured the Power Automate solution, let’s forward an email message to our Shared Mailbox folder and check whether the LUIS application is indeed able to correctly predict the Intent and, based on the CorrectIntent, whether the message gets posted in the correct Microsoft Teams channel.

Email sent to the Inbox folder of the Shared Mailbox-


 

Scores for different Intents as predicted by LUIS and Top scoring Intent-


 

Email message smart-sorted and uploaded as a message in the correct Microsoft Teams channel –
