Docker – No Time to Die

Recently there has been a lot of fuss around Kubernetes announcing in its release notes that it will stop supporting Docker as a container runtime.

In reality, the news is not nearly the disaster that the headlines can make it appear. So let’s dig a little deeper and understand the impact of Kubernetes deprecating Docker as a container runtime after v1.20.

Terminologies

What is a Container runtime?

The standard definition is that, “A container runtime is software that executes containers and manages container images on a node. Today, the most widely known container runtime is Docker, but there are other container runtimes in the ecosystem, such as rkt, containerd, and lxd”.


What is a Container?

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.


What is an Image?

An image is a read-only template that contains a set of instructions for creating a container that can run on the container platform like Docker.


Now if you read the three definitions above, you will understand that a container runtime, a container and an image are all different things. Kubernetes says it is going to deprecate Docker as a container runtime only; images produced by Docker follow the OCI image standard and can be run by any compliant runtime, so they will continue to work in your clusters.

The problem is that whenever the word Docker is used, it is assumed to mean the container itself. Most people did not interpret the announcement correctly because they simply read that Kubernetes is deprecating Docker.

The job of a container runtime, as per the definition, is simply to execute containers and manage them. Docker has always been a popular choice of container runtime, and its deprecation as a container runtime is not the end of the world for Docker, hence I suggest you read the title once again: DOCKER – NO TIME TO DIE.

In the image above you can see that it is clearly stated that support for container images built with Docker tools is not being deprecated, and this comes straight from the official documentation on the deprecation.

So I repeat: you can still use Docker to build container images for Kubernetes. What changes is that Kubernetes will now use containerd or CRI-O as the container runtime to run them.
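If you want to confirm which runtime your nodes are actually using, kubectl can show it directly; a quick, read-only check (it assumes you have kubectl access to the cluster):

kubectl get nodes -o wide

# The CONTAINER-RUNTIME column shows the runtime in use,
# e.g. docker://19.3.13 or containerd://1.4.3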

If you are standing up your own cluster, you need to make sure that you are no longer using Docker as the container runtime. If you still go ahead with it, you will get a nice fancy deprecation warning with the v1.20 release.

Well, I hope this blog post helped clear things up a little. If you have any queries or doubts around this, you are always welcome to comment and I would love to have a conversation about it.

Keep checking this blog for exciting new articles about Kubernetes, AWS and Azure Cloud, Power Automate, SharePoint Online, Power Apps, as well as document conversion and manipulation using The Muhimbi PDF Converter.

A Guide to Deploying your Application on EKS using AWS CLI – Part 1

When we talk about moving Kubernetes to one of the cloud platforms (Azure, Amazon, Google, etc.), there are a number of deployment scenarios in front of us with regard to how we want to set up Kubernetes.

Mentioned below are the scenarios through which we can move forward with K8s on the Amazon cloud platform.

  1. Kubernetes on the Cloud Machines
  2. Managed Kubernetes

With Kubernetes on the cloud machines, you will need to install, manage and administer your own Kubernetes infrastructure. However with the Managed Kubernetes, the Cloud provider will take care of setting up, managing and maintaining your Kubernetes infrastructure and you will only be concerned with using this infrastructure (Kubernetes cluster) to deploy your applications.

In this blog post, we will go ahead with the Managed Kubernetes option to deploy Kubernetes to the Amazon cloud platform, popularly known as Amazon EKS (Elastic Kubernetes Service).

We will first set up the prerequisites that are required and then cover the following in part 2 of this blog post series.

  1. Set up a Kubernetes cluster
  2. Scale and update the PODS in the Deployment
  3. Analyze the resources deployed as a part of Amazon EKS
  4. Understand the Billing from the AWS Console for EKS
  5. Clean up the resources

Prerequisites:

  • Install and configure the AWS CLI
  • Install the aws-iam-authenticator

Install and Configure AWS CLI

I have an EC2 Linux instance named “LinuxInstance” configured in my Amazon Cloud and I will be using this instance going forward.


Next, we download the AWS CLI version 1 bundled installer using the command below-

curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"

As you can see in the image above we have the “awscli-bundle.zip” package available in our file system. Now we go ahead and extract the files from the package.

unzip awscli-bundle.zip

To run the unzip command, you must first have the zip and unzip utilities installed on your instance.

NOTE – On many newer Linux distros, such as Ubuntu 20.04 and CentOS 8, the zip and unzip utilities may already come pre-installed, in which case you are good to go.

If you do not have them, install them with the commands mentioned below (these use apt, so they apply to Debian/Ubuntu-based distros; on Amazon Linux or CentOS use yum instead)-

sudo apt install zip
zip -v
sudo apt install unzip
unzip -v

Next, we run the install program. The installer places the AWS CLI at /usr/local/aws.

After that, we check whether the AWS CLI was installed and configured properly.

sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws

aws --version

As you can see in the image above, our AWS CLI version 1 has been properly installed and configured.

Next, we run the aws configure command to set up the AWS CLI installation.

aws configure

When you enter this command, the AWS CLI prompts you for four pieces of information: the Access key ID, the Secret access key, the default region name, and the default output format.
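For reference, the interactive session looks roughly like this; the values shown here are placeholders, not real credentials:

aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json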

The Access key ID and the Secret access key only become available once you have configured an IAM user in the AWS portal. I have already done that; if you haven’t, you will need to follow the official KB article below to set it up for yourself.

KB article – Understanding and getting your AWS Credentials


Installing the AWS-Iam-Authenticator

Amazon EKS uses IAM to provide authentication to your Kubernetes cluster through the AWS IAM Authenticator for Kubernetes, so let’s begin installing it.

curl -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.18.9/2020-11-02/bin/linux/amd64/aws-iam-authenticator

Now, provide permissions to the binary (execute permissions)

chmod +x ./aws-iam-authenticator

Copy the binary to a folder in your $PATH and finally check if the installation works as expected.

mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$PATH:$HOME/bin

echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc

aws-iam-authenticator help

Those are the prerequisites we will need. In part two, we will begin setting up the Kubernetes cluster and deploying our application on EKS.

Keep checking this blog for exciting new articles about Kubernetes, AWS and Azure Cloud, Power Automate, SharePoint Online, Power Apps, as well as document conversion and manipulation using The Muhimbi PDF Converter.

Convert a Table (entity) in Microsoft Dataverse (CDS) to PDF Using Muhimbi’s PDF Converter Online

In this blog post, we’ll be configuring a simple Power Automate solution to take all the records present in a table (previously known as an Entity) in Microsoft Dataverse (previously known as CDS) and convert them to PDF.

Microsoft states that Standard and custom tables within Dataverse provide a secure and cloud-based storage option for your data. Dataverse allows data to be integrated from multiple sources into a single store, which can then be used in Power Apps, Power Automate, Power BI, and Power Virtual Agents, along with data that’s already available from the Dynamics 365 applications.

Most organizations use tables in Dataverse to store data from different data sources to be used within PowerApps, but then there are situations where you want to share the data within these tables in a standard, portable format, such as PDF.

The Power Automate solution will be a scheduled solution that runs once a week, locates the table, takes all the records within that table, dynamically creates an HTML file, and then converts this HTML to PDF, which you can then send to your stakeholders as an e-mail attachment.

Prerequisites

Before we begin, please make sure the following prerequisites are in place:

  • As shown in the image above, navigate to make.powerapps.com and, on the page that comes up, you will see an option named Data in the left navigation window.
  • Inside the Data option is another option named Tables (highlighted), which contains the default tables as well as the custom tables present in your Dataverse.
  • The custom table named SalesReport is the table where our data is stored and we’ll be converting the records inside the SalesReport table to PDF.

Here is what our table looks like in the PowerApps Dataverse –

In the image above, you will see that there are a lot of columns, some default and some custom. We do not need data from all of these columns to come up in our final PDF.

In order to exclude the extraneous data when we dynamically create the HTML for the SalesReport table, we will pick and choose only the data from the relevant columns. This way, our PDF will only include the data we want to showcase.

Here is a view of the overall structure of our Power Automate solution-

Step 1 – Trigger

  • The trigger that we will choose here is Recurrence.
  • As shown in the image below, configure the Interval option with a value “7” and frequency as “Day“.
  • This configuration will run the Power Automate solution once every 7 days.
  • Configure the Start time as 2020-10-15T04:30:00.000Z.
  • 2020-10-15 is the date in yyyy-MM-dd format.
  • T is a separator between the date and the time.
  • 04:30:00.000 is the time in HH:mm:ss.SSS format.
  • The trailing Z indicates the UTC time zone.

Step 2 – List Records

  • In the Entity name, as shown in the image below, click on the drop down and choose the appropriate table ‘SalesReports‘.

Step 3 – Create HTML table

  • As shown in the image below, for the *FROM field, navigate to Add dynamic content line and choose value from the list of options available inside the List records action
  • Next, we will configure the Header and the corresponding Value for the header.
  • Since we already talked earlier about only showing the relevant data in the PDF we will pick and choose the columns.
  • I have chosen the OrderDate, Unitcost, Units and TotalSalesRevenue here and for the corresponding Headers, navigate to Add dynamic content line and choose OrderDate, Unit Cost, Units and Total as options available under the List records action.

Step 4 – Convert HTML to PDF

  • For the Source URL or HTML action as shown in the image below, enter the HTML as below
<html>
<h2> Weekly Sales Report </h2>
@{body('Create_HTML_table')}       <!-- Output of the Create HTML table action -->
</html>
  • For the Page orientation choose Landscape.
  • The Authentication type here will be Anonymous.

Step 5 – Send an Email

  • For the Attachments Name -1 option as shown in the image below, navigate to Add dynamic content line and choose Base file name option available inside the Convert HTML to PDF action.
  • Do not forget to add the .pdf extension after the name, or else the email will be generated with an attachment that has no extension.
  • For the Attachments Content as shown in the image below, navigate to Add dynamic content line and choose Processed file content option available inside the Convert HTML to PDF action.

That’s it, now we run the Flow and check whether we get the output as intended.

OUTPUTS

Keep checking this blog for exciting new articles on using The Muhimbi PDF Converter with SharePoint Online, Microsoft Flow, Power Apps and document conversion and manipulation.

Understanding the magical POD Object in K8s – Part 2

This blog post is part 2 of the blog series on Understanding the magical POD object in K8s.

To reduce the complexity, this blog series is divided into following parts-

  1. Understanding the magical POD Object in K8s – Part 1
  2. Understanding the magical POD Object in K8s – Part 2

In this blog post we will define a YAML file with the POD configurations and then use the kubectl to deploy the YAML file to the Kubernetes cluster.


STEP 1 – Create a .YAML file

We will use the vi editor to create a file named “DeployPod.yaml” and then run the kubectl command to deploy the YAML file to Kubernetes.

touch DeployPod.yaml

As shown in the image above, we have created a yaml file using the touch command.

Now we will use the vi editor to open the yaml file and write the POD configuration.

vi DeployPod.yaml

The default editor that comes with the UNIX operating system is called vi (visual editor). The UNIX vi editor is a full screen editor and has two modes of operation:

  1. Command mode commands which cause action to be taken on the file, and
  2. Insert mode in which entered text is inserted into the file.

The command vi DeployPod.yaml will search for the file named DeployPod.yaml in the directory and, if it is found, will take you straight into that file.

Now as explained above, the first thing to do after you get inside the DeployPod.yaml file is switch to the Insert Mode so that you can write the configurations. Pressing the “I” key takes you to the Insert Mode.

apiVersion: v1                           # Kubernetes API version
kind: Pod                                # Type of Kubernetes resource
metadata:                                # Metadata for the pod
  name: mypod-python
  labels: {}
  annotations: {}
spec:                                    # Blueprint for the pod & container
  containers:
  - name: mypod-python
    image: python:3.6.6-stretch

Once this is done, we need to save the file and exit. For that, first press the “Esc” key and then type :wq in the bottom-left command line area, which saves the YAML and exits the vi editor.

Now let’s understand in a bit of detail how we have written the YAML.

There are only two types of structures you need to know about in YAML:

  • Lists
  • Maps

In our YAML code above, we have two values, v1 and Pod, mapped to two keys, apiVersion and kind.

You can also specify more complicated structures by creating a key that maps to another map.

In this case, we have a key, metadata, whose value is a map with three more keys: name, labels and annotations.

You can use either labels or annotations to attach metadata to Kubernetes objects. Labels can be used to select objects and to find collections of objects that satisfy certain conditions. In contrast, annotations are not used to identify and select objects.
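As a quick illustration of how labels are used for selection, here is a small sketch you can try once the Pod has been deployed (Step 2 below), using a made-up label app=demo:

# Attach a label to the pod we created from DeployPod.yaml
kubectl label pod mypod-python app=demo

# Select pods by label; annotations cannot be used for this kind of filtering
kubectl get pods -l app=demo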


STEP 2 – Execute the YAML on the Kubernetes cluster

Now that we have written the YAML file, let’s run it.

The first and most important step is to validate that the YAML file we wrote is correct, so we will perform a trial run and check that it passes validation.

kubectl create --filename DeployPod.yaml --dry-run --validate=true

As you can see in the image above, our dry run was a success and the YAML has been validated, so we are good to go ahead. (On newer versions of kubectl, the bare --dry-run flag is deprecated; use --dry-run=client instead.)

kubectl create --filename DeployPod.yaml

That’s it, there is the POD we wanted that was defined in YAML and deployed with kubectl.
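To double check that the Pod really came up, you can query it by name; a quick verification, assuming the default namespace:

kubectl get pod mypod-python

# For more detail, including events if the container fails to start:
kubectl describe pod mypod-python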

We are nearing the end of the blog series on the magical POD object; the only remaining part we have not yet covered is Pod health and a few refinements. That is what we will cover in the next blog post, i.e. Part 3.

Keep checking this blog for exciting new articles about Kubernetes, AWS and Azure Cloud, Power Automate, SharePoint Online, Power Apps, as well as document conversion and manipulation using The Muhimbi PDF Converter.

Understanding the magical POD Object in K8s – Part 1

In this blog post we will get ourselves acquainted with the basics of the POD object in Kubernetes and use the different kubectl cmdlets to understand the POD object.

We are going to use the imperative way of understanding PODS here, i.e. run single-line kubectl commands. Once we are through with the basics, we will define a YAML file with our configuration and use kubectl to deploy the YAML to Kubernetes, which is the declarative approach, as opposed to the imperative one.

To reduce the complexity this blog series is divided into following parts-

  1. Understanding the magical POD Object in K8s – Part 1
  2. Understanding the magical POD Object in K8s – Part 2

Now, I am not going to explain the basic Kubernetes architecture here so you will need to know a bit about the different components that form the basic Kubernetes cluster.

POD – Deploy, Create, Expose, Delete

A Pod in a Kubernetes cluster is the smallest, most basic deployable object. A Pod represents a single instance of a running process in your cluster.

To understand all of this better, let’s take a look at some examples.

So first we check the resources available by default when we start the Kubernetes cluster.

kubectl get all

The kubectl get command is used to get information about the different objects in Kubernetes.

As you can see in the image above, the kubectl get all command returns the default Kubernetes service named service/kubernetes, with the Cluster IP 10.96.0.1.


The usual way to create a POD is through a Deployment. Whenever a deployment is created in a Kubernetes cluster, the deployment creates a POD with containers inside of it.

 kubectl create deployment mydeployment --image=nginx:alpine 

As you can see in the image above, we have created a deployment using kubectl create deployment, passing the name of our deployment (“mydeployment” here) and then the container image, i.e. nginx:alpine in this case.

If we run the kubectl get all command now, we observe that the following have been additionally created-

  1. Deployment with name “mydeployment”
  2. ReplicaSet

A ReplicaSet is defined with fields including a selector that specifies how to identify the Pods it can acquire, a number of replicas indicating how many Pods it should be maintaining, and a pod template specifying the data for the new Pods it should create to meet the replica count.

As you can see, right now the desired number of PODS is 1 and the current number of PODS is also 1. The ReplicaSet makes sure that if a POD becomes unhealthy and is terminated, a template is available for a new POD to take its place, so that the desired and current pod counts always match.
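You can see those Desired and Current counts for yourself by querying the ReplicaSet; a quick check (the ReplicaSet name is generated from the deployment name, so yours will differ):

kubectl get replicaset

# NAME                      DESIRED   CURRENT   READY   AGE
# mydeployment-55b7c66cdf   1         1         1       2m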


Let’s check whether, if we delete a POD, another POD automatically takes its place.

First let’s get the name of the POD.

export pod=$(kubectl get pods --template '{{range.items}} {{.metadata.name}} {{end}}')

echo Name of pod is: $pod
kubectl delete pod $pod

As you can see above, when we delete the pod named “mydeployment-55b7c66cdf-hg5pq” and then run the kubectl get all command, we see that another POD named “mydeployment-55b7c66cdf-d475d” is created automatically.

If we request one container inside a POD to be running, then the Deployment, along with the ReplicaSet, makes sure that at least one POD always remains active with the container running inside it. If a POD or a container crashes, the deployment will automatically replace it.


Pods and containers are only accessible within the Kubernetes cluster by default.

So what if you have some kind of service running inside a POD and you want to test it locally?

What we can do is expose the POD’s port so we can communicate with it; in other words, PORT-FORWARDING.

kubectl port-forward $pod 8080:80

As you can see in the image above, we have forwarded the default port of the POD, i.e. 80, to port 8080 on the localhost. The standard format is localport:podport, i.e. 8080:80.

Now let’s open another terminal and try a curl request on port 8080 to check whether the nginx hosted inside the container in our pod comes up.

curl http://localhost:8080

As you can see we are indeed able to get to our nginx application using another terminal and the curl request on port 8080.


So what if we need to delete the POD?

Note that if you delete the POD directly, the ReplicaSet will simply recreate it; to remove it for good, you have to delete the Deployment that created it in the first place.

Here is how you do that.

kubectl delete deployment mydeployment

As you can see in the image above, once we delete the Deployment “mydeployment“, it also removes the POD instantiated by it.


That’s it; those are all the basics you will ever need to get started with PODS. In production environments you would prefer the declarative approach rather than running single-line commands, but as a beginner the imperative approach is a good way to make yourself comfortable with the basics.

Keep checking this blog for exciting new articles about Kubernetes, AWS and Azure Cloud, Power Automate, SharePoint Online, Power Apps, as well as document conversion and manipulation using The Muhimbi PDF Converter.

Label and Secure your Files in SharePoint Online with Muhimbi

Today, it has become easier than ever to make almost any internal document or PDF available to anyone, anywhere- even if they’re outside of your organization.

This is usually a good thing, but the risk is that someone could send something somewhere without understanding the consequences until it’s too late, or perhaps just not caring about the consequences to begin with. Many organizations have specific policies or guidelines in place regarding the protection of documents through proper classification, labeling and access control. This helps prevent the accidental dissemination of confidential documents, but does little to address the malicious spread of them.

So, what are you to do to stop both the accidental and the malicious spread of confidential documents, while still making them available for people to do their jobs? In this blog post, we’ll answer that question by configuring our SharePoint Online library with custom labels and creating our own Power Automate solution to copy the label associated with a file, watermark the file with that label, and then use the label to secure the file, all using the capabilities native to Muhimbi’s PDF Converter Services Online.

Prerequisites –

Before we begin, please make sure the following prerequisites are in place:

Let’s start by setting-up our SharePoint Online library with Labels as follows:

Prompt the User to Select a Label every time a File is being uploaded

Navigate to the Settings page of the Document library and in the page that opens up, move to the section where all the Columns present in the Document library are displayed.

Click on Create Column and then configure a column named Label as shown below-

Please note that the Column has been configured as a Mandatory column meaning whenever a File is being uploaded to the Library, it becomes compulsory to choose a Label for that file.

Step 1 – Trigger (When a File is Created in a Folder)

  • We use the SharePoint trigger ‘When a File is Created in Folder’.
  • For the ‘Site Address’ in the image below, choose the correct site address from the drop down menu.
  • For the ‘Folder Id’ in the image below, select the source folder.

Step1

Step 2 – Get file Metadata

  • For the ‘Site Address’ in the image below, specify the same address as used in the Trigger in Step 1.
  • In the ‘File Identifier’ field, navigate to the ‘Add Dynamic content’ line and choose the x-ms-file-id option inside the ‘When a file is created in a folder’ trigger.

Step2

Step 3 – Get File Properties

  • For the ‘Site Address’ in the image below, choose the correct site address from the drop down menu.
  • For the ‘Library Name’ in the image below, select the correct source folder.
  • In the ‘Id’ field, navigate to the ‘Add Dynamic content’ line and choose the ‘ItemId’ option inside the ‘When a file is created in a folder’ trigger.

Step3

Step 4 – Get file content using Path

  • For the ‘Site Address’ in the image below, choose the correct site address from the drop down menu.
  • For the ‘File Path’ as shown in the image below, navigate to the ‘Add Dynamic content’ line and choose the ‘Full Path’ option inside the ‘Get File Properties’ action.

Step4

Step 5 – Compose action (Grab the Label Value)

  • For the ‘Inputs’ as shown in the image below, navigate to the ‘Add Dynamic content’ line and choose the ‘Label Value’ option inside the ‘Get File properties’ action.

Step 5

Step 6 – Condition to check the Label Value

  • Here we are going to check the Label configured for the source file and based on the Label value, we will decide whether the Source file needs to be Secured or not.
  • If a file has been configured with a Draft label then this indicates that the file is still in the process of being written and approved.
  • This also means that the Stakeholders have not yet reviewed the file and given it the go ahead to be used in business processes.
  • We do not need to SECURE such a file or apply restrictions to it, because the file is a work in progress and so does not hold much significance as compared to a file that has been reviewed and has a Label such as Final configured for it.
  • So here is how you configure the Condition action.

Outputs  is not equal to  Draft

  • On the left hand side of the Condition, navigate to ‘Add Dynamic content’ line and choose ‘Outputs’ (output of the compose action), then choose the parameter is not equal to and on the right hand side of the condition enter the Value ‘Draft’.
  • So, if the source file has a label value other than ‘Draft’, the condition will be satisfied and return a response of True. 

Step 7

Step 6.1 – Condition evaluates to True

As stated earlier, if the condition evaluates to True, the label value configured for the source file is not equal to ‘Draft’; it is either Sensitive or Final. In either of these cases, we should first watermark the source file with the correct Label value and then secure it.

Step 6.1.1 – Add Text watermark

  • For the ‘Source File content’, navigate to ‘Add Dynamic content’ line and choose ‘File Content’ option inside the ‘Get File content using path’ action.
  • For the ‘Watermark content’ as shown in the image below, navigate to ‘Add Dynamic content’ line and select ‘Outputs’ of the Compose action that holds the value of the Label.
  • For the ‘Font family name’, enter Times New Roman (you can choose between Arial, Times New Roman, Calibri)
  • For the ‘Font size’ enter 36 (size of font in Pt)
  • For the ‘Font color’ enter the hex color code for red i.e #FF0000
  • For the ‘Text alignment’, choose Middle center from the options present in drop down menu
  • For the ‘Word wrap’ choose None from the options present in drop down menu
  • For the ‘Position’ choose Middle Center from the options present in drop down menu
  • Enter the ‘Width’ as 400 (In Pt) and ‘Height’ as 400 (In Pt).
  • For the ‘Source file name’ as shown in the image below, navigate to ‘Add Dynamic content’ line and choose ‘File Name with extension’ option from the Get file properties action.
  • For the ‘Layer’, choose Foreground from the drop down menu
  • For the ‘Rotation’ enter the value -45, which means the watermark will be rotated 45 degrees anti-clockwise.

Step 8

Step 6.1.2 – Secure Document

  • For the ‘Source File content’, navigate to ‘Add Dynamic content’ line and choose ‘Processed file content’ option from the ‘Add text watermark’ action.
  • For the ‘Source file name’ as shown in the image below, navigate to ‘Add Dynamic content’ line and choose File Name with extension option from the ‘Get file properties’ action.
  • For the ‘Open Password’ as shown in the image below, enter the Open password. Please note that any password entered here is displayed in clear text.

 Open Password – When specified, anyone who wants to open the file will need to enter this password.

  • Similarly for the ‘Owner Password’ as shown in the image below, enter the Owner password. Please note that any password entered here is displayed in clear text.

Owner Password – When specified, anyone who wants to change the security settings on the file will need to enter this password.

  • Note that the PDF restrictions can only be applied to PDFs and not to the Office file formats (.docx, .xlsx, .pptx). If you want, you can use Muhimbi’s Convert to PDF action to first convert the Office files to PDF and then apply the PDF restrictions.
  • You will see that we are still configuring the action with the PDF restrictions below, because we do not know whether the source file will be an Office file or a PDF file.
  • If the source file is already a PDF, the Secure document action will apply the PDF restrictions to it; if the source file is in an Office file format, these restrictions will simply be skipped.
  • Here we are configuring following as PDF restrictions- Print|ContentCopy|FormFields|ContentAccessibility

PDF restrictions – One or more restrictions to apply to the PDF file, separated by a pipe ‘|’ character .

By default it applies all restrictions (Print|HighResolutionPrint|ContentCopy|Annotations|FormFields|ContentAccessibility|DocumentAssembly), but any combination is allowed.

Enter the word Nothing to not apply any restrictions. In order to activate these settings you must supply an owner password.

IMPORTANT NOTE – 

If you do not want the Open or Owner Password to be entered in clear text you can configure a Secret in Azure key vault and pass that Secret in the Open Password and Owner Password fields.

Please check my Blog post on Using Azure Key Vault to avoid passing Credentials in Power Automate

Step 9

Step 6.1.3 – Create file

  • For the ‘Site Address’ in the image below, choose the correct site address from the drop down menu.
  • Select the correct ‘Folder Path’ where the Watermarked and Secured file should be created.
  • For the ‘File name’ as shown in the image below, navigate to ‘Add Dynamic content’ line and choose File Name with extension option from the ‘Get file properties’ action.
  • For the ‘File content’ as shown in the image below, navigate to ‘Add Dynamic content’ line option and choose Processed file content from the ‘Secure Document’ action.

Step 10

Step 6.2 – Condition evaluates to False

As stated earlier, if the condition evaluates to False, the label value configured for the source file is equal to DRAFT, which means we do not need to secure it; we only need to add a text watermark.

Step 6.2.1 – Add Text watermark

  • For the ‘Source File content’, navigate to ‘Add Dynamic content’ line and choose ‘File Content’ option inside the ‘Get File content using path’ action.
  • For the ‘Watermark content’ as shown in the image below, navigate to ‘Add Dynamic content’ line and select ‘Outputs’ of the Compose action that holds the value of the Label.
  • For the ‘Font family name’, enter Times New Roman (you can choose between Arial, Times New Roman, Calibri)
  • For the ‘Font size’ enter 36 (size of font in Pt)
  • For the ‘Font color’ enter the hex color code for red i.e #FF0000
  • For the ‘Text alignment’, choose Middle center from the options present in drop down menu
  • For the ‘Word wrap’ choose None from the options present in drop down menu
  • For the ‘Position’ choose Middle Center from the options present in drop down menu
  • Enter the ‘Width’ as 400 (In Pt) and ‘Height’ as 400 (In Pt).
  • For the ‘Source file name’ as shown in the image below, navigate to ‘Add Dynamic content’ line and choose ‘File Name with extension’ option from the Get file properties action.
  • For the ‘Layer’, choose Foreground from the drop down menu
  • For the ‘Rotation’ enter the value -45, which means the watermark will be rotated 45 degrees anti-clockwise.

Step 11

Step 6.2.2 – Create File

  • For the ‘Site Address’ in the image below, choose the correct site address from the drop down menu.
  • Select the correct ‘Folder Path’ where the watermarked file should be created.
  • For the ‘File name’ as shown in the image below, navigate to ‘Add Dynamic content’ line and choose File Name with extension option from the ‘Get file properties’ action.
  • For the ‘File content’ as shown in the image below, navigate to ‘Add Dynamic content’ line option and choose Processed file content from the ‘Add Text watermark’ action.

Step 10

Perfect, let’s run our Power Automate solution now and check the outputs.

Let us consider a .DOCX file with a Label FINAL configured for it.

SCENARIO – A .DOCX file with FINAL as a LABEL

Source file –

SourceFile

Flow run –

OutputFlowRun

Watermarked and Secured .DOCX File – 

Dest

Password

Output123

Keep checking this blog for exciting new articles about Power Automate, SharePoint Online, Power Apps, as well as document conversion and manipulation using The Muhimbi PDF Converter.

Converting Modern SharePoint Online Pages (ASPX pages) to PDF

One of the most common questions asked by customers of The Muhimbi PDF Converter Services Online regards using The PDF Converter to convert Modern Experience SharePoint Online pages to PDF in conjunction with Microsoft Flow, Logic Apps, and PowerApps.  The short answer is yes, The PDF Converter can certainly do this, the longer answer (how to do it) is the topic of this blog post.

For those not familiar with the product, The Muhimbi PDF Converter Online is one of a line of subscription services that converts, merges, watermarks, secures, and OCRs files from Microsoft Flow, Logic Apps, and PowerApps.  Not to mention your own code using C#, Java, Python, JavaScript, PHP, Ruby and most other modern platforms.

In this post, we’ll show you how to create a Power Automate (Flow) solution to select a Modern SharePoint Online page from the Site Pages app and convert it to PDF.

Prerequisites

Before we begin, please make sure the following prerequisites are in place:

Now, on to the details of how to create a Power Automate (Flow) solution to select a  Modern SharePoint Online page from Site pages App and convert it to PDF.

First, let’s review how the basic structure of our Power Automate (Flow) looks:

Flow

Step 1 – Trigger (For a Selected File)

  • We use the SharePoint trigger ‘For a selected file’.
  • For the ‘Site Address’ in the image below, choose the correct site address from the drop down menu.
  • The Modern Pages or ASPX pages are stored in a special App called Site Pages which is not an App of type Library.  So, our Site Pages app won’t appear in the drop down menu.  Instead, what you will get as choices in the drop down menu are Library names.

Note – We can convert any ASPX page sitting in any of the libraries across SharePoint Online. For the sake of this blog post, I am targeting the Site pages (ASPX pages) that are present inside the Site Pages library.

  • So what we can do is enter the GUID value of the Site Pages App to get all items (in other words Pages) present in the Site Pages.
  • For obtaining the GUID value, navigate to the Library settings and  copy the Encoded GUID value from the URL as shown in the image below-

GUID

  • This is, however, not the GUID that we want, since it is encoded. We need the GUID in its pure, decoded form.
  • Navigate to this site and enter the copied GUID and click on Decode as shown in the image below.

Decoded

  • Remember that we get the decoded GUID in curly brackets; however, while adding this GUID in the ‘Library Name’ option as a custom value, you need to enter just the GUID without the curly brackets, as shown below.

Capture

Step 2 – Get File Properties

  • For the ‘Site Address’ in the image below, choose the correct site address from the drop down menu.
  • For the ‘Library Name‘, enter the same GUID value as in the previous step.
  • For the ‘Id‘ as shown in the image below, navigate to ‘Add Dynamic Content‘ line and choose ‘ID‘ from the ‘For Selected File‘ action.

Capture1

Step 3 – Convert HTML to PDF

  • In the ‘Source URL or HTML’ section shown in the image below, navigate to ‘Add Dynamic Content‘ line and choose ‘Link to item‘ from the ‘Get file properties‘ action.
  • In the ‘Page orientation’ field, select the appropriate option. Depending on the content and layout of the page ‘Portrait’ may work out best.
  • In the ‘Media type’ field, select the ‘Print’ option from the drop down menu. (This automatically strips out most of the SharePoint User interface).
  • Select ‘SharePoint Online’ as the ‘Authentication type’ from the drop down menu.
  • You will need to enter the correct ‘User name’ and ‘Password’ to get authenticated with the SharePoint Online authentication that you selected in the authentication field above.
  • If you are not comfortable with passing credentials directly in the Power Automate action and in plain text, you can create a Secret in Azure and pass this secret.
  • For more details, check out my blog post on ‘Using Azure Key vault to avoid passing Credentials in Power Automate‘.
  • In the ‘Conversion Delay’ field, enter a delay of 10000 (in milliseconds, so 10 seconds).  This delay will give the page time to load before it is converted.

HTML

Step 4 – Create File

  • For the ‘Site Address’ in the image below, choose the correct site address from the drop down menu.
  • Select the correct ‘Folder Path’ where the converted PDF should be created.
  • Give a meaningful ‘File Name’ to the created PDF, but make sure you remember to add the extension ‘.pdf’ after the ‘File Name’ and to make the file name unique, or multiple runs of the flow will overwrite the same file.  I recommend basing it on the source file name. You can get this by navigating to the Add dynamic content line and choose ‘Name’ inside the ‘Get File properties‘ action.
  • Select the ‘Processed file content’ option, shown in the image below, to populate the ‘File Content’ field.

Name

That’s it, navigate to the Site pages app, select a Site page and run the Power Automate for the selected Page as shown below-

HowToConvert

To see the fruits of our labor, please see below what the Wiki page looks like when viewed in a browser and how it looks as a PDF.

Source Wiki Page –

Original Page

Converted PDF –

Untitled

Microsoft is constantly making changes to the Modern Experience, so we cannot ignore edge cases. The Modern View implementation is updated so often that it is next to impossible to provide a single solution that works for every case.

For more details visit this User voice forum – https://sharepoint.uservoice.com/forums/329214-sites-and-collaboration/suggestions/32229454-printing-modern-pages

Keep checking this blog for exciting new articles about Power Automate, SharePoint Online, Power Apps, as well as document conversion and manipulation using The Muhimbi PDF Converter.

Hyderabad Power Apps and Power Automate User group

Light Virtual Conference –  24-hour Live Conference fundraiser event with speakers around the world speaking on Microsoft Technologies.

Session Agenda –

  • A Canvas application to record audio via the Microphone control of PowerApps and store it in an AudioCollection.
  • Speech services in Azure portal
  • An Azure function to study cross conversion between different Audio formats
  • Power Automate solution to convert Speech to Text

Demos –

  1. Design a Canvas App with The Microphone Control to capture Audio.
  2. Create an Azure Function to convert audio captured in Power Apps from WEBM to WAV format using FFmpeg.
  3. Create a Power Automate (Flow) to create an HTML file, using the text obtained from the output of the Speech to Text action.

LIGHTUP Virtual Conference

Light Virtual Conference –  24-hour Live Conference fundraiser event with speakers around the world speaking on Microsoft Technologies.

Yash Kamdar-Unicef flyer

 

Session Agenda –

  • Introduction to Microsoft Teams
  • Power Automate Teams Connector
  • Triggers Demo (Send an SMS)
  • Send automatic responses, when mentioned on Teams
  • Document Approval process using Adaptive Cards
  • Exporting daily messages from Microsoft Teams
  • Managing Files in Teams

 

Demos –

Adaptive Cards for Doc Approval in Microsoft Teams –

Teams-2

Design a Canvas App with The Camera Control to capture Images for Identification/Recognition

This post is part 1 of a 2 part series on Identifying/Recognizing Images captured in Camera control of PowerApps.

In this article, we will capture an Image using the Camera control of PowerApps and then pass it to our Azure Cognitive service for Image Identification/Recognition hosted in Azure environment

This is an advanced topic related to a business scenario since it effectively allows a Power User to consume the Custom Vision API in Azure Cognitive services for Identifying/Recognizing Images.

To reduce the complexity, we will divide this article into two parts:

    1. Design a Canvas App with The Camera Control to capture Images.
    2. Power Automate (Flow) solution to Identify/Recognize an Image.

 

Prerequisites-

Before you begin, please make sure the following prerequisites are in place:

  • An Office 365 subscription with access to PowerApps and Power Automate (Flow).
  • An Azure Subscription to host an Image recognition cognitive service.
  • Power Automate Plan for consuming the HTTP Custom actions.
  • Appropriate privileges to create Power Apps and Power Automate(Flow).
  • Working knowledge of both Power Apps and Power Automate(Flow).

 

Designing a Canvas App with The Camera Control to Capture Image

Step 1- Creating the basic structure of The Canvas App

  • Go to powerapps.com, sign in with your work or school account, click on the Apps menu in the left navigation bar, and then click on ‘+Create’ and Select Canvas app from blank.
  • Specify the Name and the Format of the APP and Click on ‘Create’.
60
  • Add a Camera control as below.

cameraimage

  • Now add a Button to the Canvas as shown below.

105

  • Next, rename the Button to ‘Submit’ as shown.

70

 

  • Now that we have the outer body ready, let’s go ahead and configure our components with formulas.

 

Step 2- Configuring the components with Formulas

  • We will first configure a collection called “collectphoto” to add a captured Image.
  • Then, we’ll create a variable ‘JSONSample’ and set it to the JSON of the image.
  • Select the ‘OnSelect’ property of the Camera and add the following formula to it:

ClearCollect(collectphoto,Camera2.Photo);Set(JSONSample,JSON(collectphoto,JSONFormat.IncludeBinaryData));

Fomulae

 

  • Now it’s time to add a Power Automate (Flow) to our Power Apps.
  • Inside the Action menu, there is an option to add ‘Power Automate’ to your existing Power Apps. To do this, click on the ‘Power Automate’ option as highlighted.

15

  • Then, click on ‘Create a new Flow’ as shown below.

105

  • Rename the Power Automate (Flow) to “AIImageReCogService” and add ‘PowerApps’ as the trigger.
  • Once that has been done, add a ‘Compose’ action and select ‘CreateFile_FileContent’ from the Dynamic content in the pane on the right side of the image below.
  • Make sure you click on Save.

AIImageRecog

Note – We completed these steps in the Power Automate (Flow) just so that we could get the Power Automate (Flow) added to our Power Apps. Later in this article, we will add more actions to the Power Automate (Flow) so as to carry out Image Recognition.

  • Finally select the ‘OnSelect’ property of the ‘Submit’ button and add the following formula.

AIImageReCogService.Run(First(collectphoto).Url)

  • The above formula tells the ‘Submit’ button to trigger the Power Automate (Flow) named ‘AIImageReCogService’ created earlier, using the .Run() method, in which we pass the Url of the first image sample sitting in the Image Collection, ‘collectphoto‘.
  • Now that we have our Power App ready, let’s head towards configuring the rest of the Power Automate solution.

Create a LUIS application for Smart Email Management

I have often wondered: what if there was a way for us to talk to our systems in our own natural language? What if our applications were able to interpret and understand our language and then carry out predefined tasks in an intelligent manner by following a strict prediction model?

Well no more wondering as Microsoft has introduced “Language Understanding Intelligent Service” – LUIS.

LUIS is a cloud-based service to which you can apply customized machine learning capabilities by passing in Utterances (natural language) to predict the overall Intent, and thereby pull out relevant detailed information.

 

In this article, we are taking a real-world scenario where a support team is trying to implement LUIS on a common shared mailbox, so that the intelligent service can read the message body of each email and, based on the prediction model, determine the correct Microsoft Teams channel to which the email needs to be assigned.

Diag

 

To reduce the complexity, we will divide this article in two parts:

  1. Design and Train our LUIS Application
  2. Create a Power Automate solution for Implementing Smart Email Management based on LUIS Predictions.

 

Prerequisites-

Before you begin, please make sure the following prerequisites are in place:

 

 

Step 1 – Building the LUIS Application

  • Sign in to LUIS portal .
  • Once successfully signed in, on the page that appears select ‘+ New app for conversation’ as shown in the image below.

Add

 

  • A pop-up form appears where you need to fill in the basic details for the LUIS application: the ‘Name‘, the ‘Culture‘ (essentially the language the LUIS application should expect) and the ‘Description‘.
  • Once this information is filled in, click on ‘Done‘.

App2

 

Step 2 – Utterances and Intents

We now proceed by creating Intents for our LUIS application, but wait, what exactly is an Intent, you ask?

  • An Intent represents an action that the user wants to perform.
  • If you remember the image we saw a couple of minutes before, the intent here is to classify emails. That’s it, let’s keep it simple. So when I say classify I need to know the categories for the classification right !! These categories will be our Intents.

 

  • We need to assign emails to one of the three categories which are our Intents, namely –
    • OnPremise team
    • Online team
    • Sales team

 

Step 2.1 Creating Intents

  • Picking up where we left off: once your LUIS application has been created, you will be navigated to a page where you will see an option called ‘Intents‘ in the left navigation.
  • Select the ‘Intents‘ option and click on ‘+ Create‘ as shown in the image below-

Intent1

  • In the pop-up box that opens, enter the correct ‘Intent name‘ and click on ‘Done‘.
  • Do this for ‘Ticket_OnPremise‘, ‘Ticket_Online‘ and ‘Sales‘.

TicketOP

 

Online

 

Sales

 

Step 2.2 – Creating Utterances

  • Utterances are inputs from the user or a system that your LUIS application will receive.
  • The LUIS application needs to understand and interpret these utterances to extract intents and entities from them, and so it is extremely important to capture a variety of different example Utterances for each intent.
  • Basically you need to type in the Utterances i.e the expected words that your users will normally be writing in the email messages being received by your shared mailbox.
  • Navigate to the ‘Intents’ that we have created in Step 2.1 and start writing Utterances as shown in the image below.

SampleUtterance

 

  • If you have closely observed the image above, I have written an Utterance

How do I decide the no. of Application, Web Front end and Search servers needed to be configured in my SharePoint 2019 environment

  • Once you write an Utterance and press Enter, LUIS starts breaking the Utterance down and keeps track of the keywords inside it.
  • The more often a particular word appears in the sample Utterances, the more confident LUIS becomes in predicting the Intent, and thus the higher the prediction score for that intent.

 

  • Please take a look at the sample Utterances across Intents that I have configured for our LUIS application.

 

Ticket_OnPremise Utterances –

Score

 

Ticket_Online Utterances –

OnlineUt

 

Sales Utterances – 

SalesUT

  • Now that you have seen my sample Utterances, let’s go ahead and Train our LUIS application.
  • But WAIT! Did you notice that in all the images above the ‘TRAIN‘ button at the top is showing in red?
  • That is basically an indication from the LUIS application that you have untrained utterances registered against Intents in your LUIS application.

 

Step 3 – Train the LUIS application

  • Now that we have built up the basic structure of our LUIS application, let us go ahead and train it. We have already been receiving indications from the LUIS application that it has untrained utterances across intents.
  • Just navigate to the top of the page and hit the ‘Train‘ button.
  • The LUIS application will start training itself and show a notification stating that it is collecting the training data, as shown in the image below-

Train

  • Sit back and relax, it will take some time for the LUIS application to train itself.
  • Once the training is finished, the LUIS application will notify you that the training is completed.
  • Now it is time to test our LUIS application before we go ahead and Publish it.

 

Step 4 – Test the LUIS application

  • Click on the ‘Test’ button from the top navigation and it opens up a test environment for us as shown in the image below.
  • Here what we can do is type sample utterances once again and see if the LUIS application (after training) is able to predict the Intents correctly.

Train

  • Let’s for example type a sample utterance and hit Enter –

One of the actions in my Power Automate solution keeps failing

  • As you can see in the image below, LUIS quickly runs the test utterance and posts a result. It has correctly predicted that the intent is ‘Ticket_Online’, which is also the ‘Top-scoring Intent‘ with a score of 0.483; that is the highest of the three, but still a fairly low confidence, because this is just our first test.
  • You need to keep training the LUIS app with more and more utterances so that its confidence keeps increasing.

Yo

 

  • Let’s go ahead and test another utterance and see if this time the confidence i.e ‘Intent Score‘ increases or not.

test2

  • There you go! If you observe, this time the ‘Top-Scoring Intent’ has a score of 0.723, which simply means that the LUIS application is more confident about the intent now than it was for the previous utterance.
  • So basically, the more utterances are passed in, the more intelligent the LUIS application becomes.

 

Step 5 – Publish the LUIS application

  • That’s it, we are done here.
  • If you think about it, now that you know the basics it is so easy to go ahead and configure a LUIS application which at the start may seem like a daunting task.
  • Just navigate to the top of your screen and click on the ‘Publish’ button.
  • A pop up form opens up asking for the Slot in which the LUIS application needs to be published, just select Production and click on Done.

publish
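Once published, the application also exposes an HTTP prediction endpoint that you can call directly. Below is a minimal sketch using curl against the LUIS v2.0 prediction API; the region, app ID and subscription key are placeholders that you would take from your own LUIS resource:

curl --get "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<your-app-id>" \
  --data-urlencode "subscription-key=<your-subscription-key>" \
  --data-urlencode "verbose=true" \
  --data-urlencode "q=One of the actions in my Power Automate solution keeps failing"

# The JSON response contains the topScoringIntent and its score,
# which is what the Power Automate solution in part 2 will consume.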

 

Next we will be creating a Power Automate solution to grab the ‘Prediction‘ and in turn the ‘CorrectIntent‘ exposed by the ‘LUIS application‘, based on which we will Automate Decision Making.

Power Automate (Flow) solution to Identify/Recognize an Image

This post is part 2 of a 2 part series on Image Identification/Recognition using Azure Cognitive Services.

In our previous post, we created a Canvas application to capture an image using the Camera control of PowerApps and then pass it to our Azure Cognitive Service for Image Identification/Recognition hosted in the Azure environment.

To reduce the complexity, we have divided this article in two parts:

  1. Design a Canvas App with The Camera Control to capture Images for Identification/Recognition
  2. Power Automate (Flow) solution to Identify/Recognize an Image

Prerequisites-

Before you begin, please make sure the following prerequisites are in place:

  • An Office 365 subscription with access to PowerApps and Power Automate (Flow).
  • An Azure Subscription to host an Image recognition cognitive service.
  • Power Automate Plan for consuming the HTTP Custom actions.
  • Appropriate privileges to create Power Apps and Power Automate(Flow).
  • Working knowledge of both Power Apps and Power Automate(Flow).

 

Now that the prerequisites are in place, go back to the “Image Recognition” Power Automate (Flow) that we started creating in part 1 of this blog series.

 

Step 3 – Create File

  • For the ‘Site Address’ as shown in the image below, enter the correct site address from the drop down menu where you intend to create the image file against the image captured in Camera control of PowerApps.
  • For the ‘Folder Path‘, select the appropriate Library using the Folder menu on the right hand side.
  • For the ‘File Name‘, give a name as per your choice but make sure you do not forget to add the ‘.jpg‘ extension.
  • For the ‘File content‘ as shown in the image below, Navigate to Add to Dynamic content line and select ‘Outputs‘ option available inside the ‘Compose‘ action.

Create file

 

Step 4 – HTTP Trigger

  • Insert an ‘HTTP‘ action and select the ‘Post‘ method from the drop down menu.
  • For the ‘URI‘ as shown in the image below, enter the below mentioned URI where we will be making a simple call to our API i.e the Image Recognition Cognitive Service hosted in the Azure environment.

https://westcentralus.api.cognitive.microsoft.com/vision/v2.0/analyze?visualFeatures=["Categories","Adult","Tags","Description","Faces","Color","ImageType","Objects","Brands"]&language=en

  • For the ‘Headers‘ as shown in the image below, you need to enter the ‘Ocp-Apim-Subscription-key‘ as well as the ‘Content-type‘.
  • You can obtain the ‘Ocp-Apim-Subscription-key‘ from the Azure environment which is basically the ‘Subscription key‘ that you obtain once you have configured the Cognitive service.
  • For the ‘Queries‘ as shown in the image below, enter the value as ‘Description,Tags‘ against the ‘visualfeatures‘.
  • For the ‘Body‘ field, Navigate to ‘Add dynamic content‘ line and choose value ‘Outputs‘ available under the ‘Compose‘ action.

HTTP
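If you want to sanity-check the Cognitive Service outside of Power Automate, the same call can be exercised with curl. This is only a sketch; the region, subscription key and image file are placeholders for your own values:

curl -X POST \
  "https://westcentralus.api.cognitive.microsoft.com/vision/v2.0/analyze?visualFeatures=Description,Tags&language=en" \
  -H "Ocp-Apim-Subscription-Key: <your-subscription-key>" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @sample-photo.jpg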

 

Step 5 – Parse JSON

  • Next we need to parse the response obtained from The Cognitive Services API i.e the HTTP post request in order to extract the identified/recognised text.
  • For the ‘Content‘ as shown in the image below, Navigate to the Add dynamic content line and choose ‘Body‘ option available inside ‘HTTP‘ action.
  • For the ‘Schema‘ please use the payload as below-
{
    "type": "object",
    "properties": {
        "tags": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "name": {
                        "type": "string"
                    },
                    "confidence": {
                        "type": "number"
                    }
                },
                "required": [
                    "name",
                    "confidence"
                ]
            }
        },
        "description": {
            "type": "object",
            "properties": {
                "tags": {
                    "type": "array",
                    "items": {
                        "type": "string"
                    }
                },
                "captions": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "text": {
                                "type": "string"
                            },
                            "confidence": {
                                "type": "number"
                            }
                        },
                        "required": [
                            "text",
                            "confidence"
                        ]
                    }
                }
            }
        },
        "requestId": {
            "type": "string"
        },
        "metadata": {
            "type": "object",
            "properties": {
                "width": {
                    "type": "integer"
                },
                "height": {
                    "type": "integer"
                },
                "format": {
                    "type": "string"
                }
            }
        }
    }
}

ParseJP
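
For reference, a trimmed-down response from the Computer Vision ‘analyze‘ call that matches the schema above looks roughly like the sample below (all values are purely illustrative):

{
    "tags": [
        { "name": "text", "confidence": 0.98 },
        { "name": "whiteboard", "confidence": 0.93 }
    ],
    "description": {
        "tags": [ "text", "whiteboard" ],
        "captions": [
            { "text": "a close up of a whiteboard", "confidence": 0.91 }
        ]
    },
    "requestId": "00000000-0000-0000-0000-000000000000",
    "metadata": {
        "width": 1280,
        "height": 720,
        "format": "Jpeg"
    }
}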

 

Step 6 – Initialize Variable

  • Add an ‘Initialize Variable‘ action and for the ‘Name‘ as shown in the image below, give a meaningful name like ‘OutputText‘.
  • For the ‘Type‘, select ‘String‘ from the drop-down menu.

InitVar

 

Step 7 – Apply to each

  • The ‘Parse JSON‘ action will parse the output of the Image Recognition Cognitive Services API.
  • We will add an ‘Apply to each‘ loop to iterate through the items parsed by our ‘Parse JSON‘ action.
  • The property that holds the recognized text is ‘captions‘, which is returned by our Image Recognition Cognitive Service hosted in Azure.
  • In the ‘Select an output from previous step‘ field as shown in the image below, navigate to the ‘Add dynamic content‘ line and choose ‘Captions‘ available under the ‘Parse JSON‘ action.
  • Next we will set the variable ‘OutputText‘, initialized in the previous step, to the ‘text‘ property inside the captions object, which holds the final identified/recognised text.
  • Add a ‘Set Variable‘ action and for the ‘Name‘, select the correct variable, i.e. ‘OutputText‘, from the drop-down menu.
  • For the ‘Value‘ as shown in the image below, navigate to the ‘Add dynamic content‘ line and choose the ‘Text‘ option available inside the ‘Parse JSON‘ action (the underlying expressions are sketched after the image below).

Apply
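
Behind the dynamic content picker, the selections above translate into expressions roughly like the ones below. Action names such as ‘Parse_JSON‘ and ‘Apply_to_each‘ depend on what your actions are actually called, so treat these as a sketch rather than something to paste verbatim:

Select an output from previous step: body('Parse_JSON')?['description']?['captions']
Set variable ‘Value‘:                items('Apply_to_each')?['text']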

 

Step 8 – Update File properties

  • Now that we have the identified/recognized text safely preserved in our ‘OutputText‘ variable, let us go ahead and update the image file created in SharePoint with that text.
  • I have created a column named ‘RecogText‘ in the SharePoint library where we create the captured image from Power Apps.
  • Add an ‘Update file properties‘ action and for the ‘Site Address‘ as shown in the image below, select the correct site address.
  • For the ‘Library Name‘, select from the drop-down menu the library where we create our image.
  • For the ‘Id‘ field, navigate to the ‘Add dynamic content‘ line and select the ‘ItemId‘ option available under the ‘Create file‘ action.
  • For the ‘RecogText‘ field, available once you enter the correct site address and library name, navigate to the ‘Add dynamic content‘ line and select ‘OutputText‘ available under the list of variables.

Update

 

That’s all that is needed.

Let us now check whether our Power Automate solution is indeed working as expected and returning the right text for the captured image.

 

Photo captured in Camera control of PowerApps-

CapturedImage

 

Image Created in SharePoint Library and Recognized Text –

Output

Create a Power Automate solution for Implementing Smart Email Management based on LUIS Predictions

This post is part 2 of a 2-part series on creating a LUIS application for Smart Email Management.

In our previous article, we built and deployed a LUIS application, trained it by passing in example ‘Utterances‘ for the created ‘Intents‘, and performed tests to check whether LUIS provides correct ‘IntentScores‘ for the test ‘Utterances‘.

 

To reduce complexity, we divided this topic into two parts:

  1. Design and train our LUIS application
  2. Power Automate solution for Implementing Smart Email Management based on LUIS Predictions.

 

In this article, we will create a Power Automate solution to grab the ‘Prediction‘, and in turn the ‘CorrectIntent‘, exposed by the LUIS application, based on which we will automate decision making.

Our primary goal behind this solution is to implement smart sorting of the emails received/dropped in a Shared Mailbox folder (by passing the email body to LUIS) and thereby automate assigning those emails to the correct Microsoft Teams channels based on ‘Intent‘ and ‘Prediction Score‘.

 

Prerequisites-

Before you begin, please make sure the following prerequisites are in place:

 

Let’s start configuring our Power Automate (Flow) solution then.

Step 1 – When a new email arrives

  • For the ‘Folder‘ as shown in the image below, select the appropriate folder of the mailbox using the ‘Folder menu‘ present on the right-hand side.
  • The ‘Inbox‘ folder of the selected mailbox, as shown in the image below, is the folder where all emails will be received/dropped before we implement smart sorting using LUIS predictions.

Trigger

 

Step 2 – Get Prediction

  • For the ‘App Id‘ as shown in the image below, click on the drop-down menu and, from the list of LUIS applications, select the correct LUIS application present in your subscription.
  • For the ‘Utterance text‘, navigate to the ‘Add dynamic content‘ line and choose ‘Subject‘ or ‘Body Preview‘ present inside the ‘When a new email arrives‘ trigger (the REST call this action makes is sketched after the image below).

GetPrediction
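
Under the hood, the ‘Get Prediction‘ action calls the LUIS prediction endpoint. As a rough, illustrative sketch (the region, App Id and key below are placeholders for your own values), the request looks something like this:

GET https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<your-app-id>?subscription-key=<your-key>&verbose=true&q=<utterance text>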

 

Step 3 – Initialize Variable

  • Next, add an ‘Initialize Variable‘ action and for the ‘Name‘ as shown in the image below, enter ‘CorrectIntent‘.
  • For the ‘Type‘, select ‘String‘ from the drop-down menu.

InitVar

 

Step 4 – Parse JSON

  • For the ‘Content‘ as shown in the image below, navigate to the ‘Add dynamic content‘ line and choose the ‘LUIS Prediction‘ output available under the ‘Get Prediction‘ action.
  • For the ‘Schema‘, please enter the following schema (a sample prediction that matches it is shown after the image for reference)-
{
    "type": "object",
    "properties": {
        "query": {
            "type": "string"
        },
        "topScoringIntent": {
            "type": "object",
            "properties": {
                "intent": {
                    "type": "string"
                },
                "score": {
                    "type": "number"
                }
            }
        },
        "intents": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "intent": {
                        "type": "string"
                    },
                    "score": {
                        "type": "number"
                    }
                },
                "required": [
                    "intent",
                    "score"
                ]
            }
        },
        "entities": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "entity": {
                        "type": "string"
                    },
                    "type": {
                        "type": "string"
                    },
                    "startIndex": {
                        "type": "integer"
                    },
                    "endIndex": {
                        "type": "integer"
                    },
                    "score": {
                        "type": "number"
                    }
                },
                "required": [
                    "entity",
                    "type",
                    "startIndex",
                    "endIndex",
                    "score"
                ]
            }
        }
    }
}

ParseJP
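
For reference, a LUIS prediction that matches the schema above looks roughly like the sample below (the intent names and scores are purely illustrative):

{
    "query": "I am unable to log in to my account",
    "topScoringIntent": {
        "intent": "AccessIssue",
        "score": 0.92
    },
    "intents": [
        { "intent": "AccessIssue", "score": 0.92 },
        { "intent": "None", "score": 0.04 }
    ],
    "entities": []
}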

 

 

Step 5 – Compose

  • Next, add a ‘Compose‘ action and for the ‘Inputs‘ as shown in the image below, navigate to the ‘Add dynamic content‘ line and choose the ‘intent‘ option from the ‘topScoringIntent‘ object available under the ‘Parse JSON‘ action (the equivalent expression is sketched after the image below).

Compose
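
Expressed directly as an expression rather than via the dynamic content picker, the ‘Inputs‘ field corresponds to something along the lines below (assuming the parse action is named ‘Parse JSON‘; adjust the name to match your own flow):

body('Parse_JSON')?['topScoringIntent']?['intent']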

 

Step 6 – Set Variable

  • Now we will set the ‘CorrectIntent‘ variable created above.
  • For the ‘Name‘ as shown in the image below, select the variable ‘CorrectIntent‘ from the drop-down menu.
  • For the ‘Value‘ as shown in the image below, navigate to the ‘Add dynamic content‘ line and choose the ‘Outputs‘ option available under the ‘Compose‘ action.

SetVar

 

Step 7 – Switch case

  • Next, add a ‘Switch‘ conditional action.
  • For the ‘On‘ field as shown in the image below, navigate to the ‘Add dynamic content‘ line and select ‘CorrectIntent‘ available inside the ‘Variables‘ section.
  • What we are doing here is matching ‘CorrectIntent‘, i.e. the intent that scored the highest among all the intents according to the trained prediction model, against the case values.
  • When ‘CorrectIntent‘ matches a case’s ‘Equals‘ value, the flow executes that particular case.
  • If you look closely at the ‘Cases‘, the value for each case is the same as one of the Intents created in the LUIS application that we deployed earlier in part 1.
  • Since we want to assign each email message to a particular Microsoft Teams channel based on the LUIS prediction and ‘CorrectIntent‘, let’s go ahead and create some channels.
  • As shown in the image below, add a ‘Post a message‘ action inside each of the cases.
  • For the ‘Team‘, select the correct team from the drop-down menu. Similarly, for the ‘Channel‘, select the appropriate team channel from the drop-down menu.
  • In the ‘Message‘ field inside the ‘Post a message‘ action as shown in the image below, configure a message, then navigate to the ‘Add dynamic content‘ line and choose ‘Utterance text‘ available inside the ‘Get Prediction‘ action so that the message from the received email gets copied across (a simplified code-view sketch of the Switch is shown after the image below).

Post
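
In the flow’s code view, the Switch boils down to a structure like the sketch below. The case value ‘AccessIssue‘ is a hypothetical intent name used purely for illustration; your own cases should carry the exact intent names from your LUIS application, each wrapping its own ‘Post a message‘ action.

{
    "type": "Switch",
    "expression": "@variables('CorrectIntent')",
    "cases": {
        "Case_AccessIssue": {
            "case": "AccessIssue",
            "actions": { "Post_a_message": { } }
        }
    },
    "default": {
        "actions": { }
    }
}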

 

Now that we have configured the Power Automate solution, let’s forward an email message to our Shared Mailbox folder and check whether the LUIS application correctly predicts the Intent and, based on ‘CorrectIntent‘, whether the message gets posted to the correct Microsoft Teams channel.

Email sent to the Inbox folder of the Shared Mailbox-

Email

 

Scores for different Intents as predicted by LUIS and Top scoring Intent-

IntentScoring

TopScore

 

Email message smart-sorted and posted as a message in the correct Microsoft Teams channel-

FinalOutput