Running Python triggered from Microsoft Flow

Would anyone know the approach for calling a Python application/script (one that relies on imported libraries like pandas) from Microsoft Flow?
The complete problem is this: the client uploads a file to Dropbox (that is how his ERP works). The upload is linked to our Microsoft Flow, so whenever he uploads a file, Microsoft Flow registers it (right now it just redirects it to us). What I need to do next is run a Python application/script on that file in Dropbox after the upload. I can put my Python script directly into that Dropbox folder. What I do not know is how to trigger the script and ensure it runs (is interpreted).
Thank you for any suggestions.

To execute code locally from Microsoft Flow, install and configure the on-premises data gateway per the instructions in the link below:
https://learn.microsoft.com/en-us/flow/gateway-reference
Create a flow that copies the new file from Dropbox to the local filesystem.
Create a watcher on the local directory that detects new file creation and executes the Python code for the new file (see the sketch below).
Refer to the link below for PowerShell script execution; a similar process can be designed for Python:
https://flow.microsoft.com/en-us/blog/flow-of-the-week-local-code-execution/
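As an illustration of the watcher step, here is a minimal sketch using the third-party watchdog package (an assumption; the directory and script names are placeholders):

import subprocess
import sys
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

# Placeholder: the local folder the flow copies Dropbox files into.
WATCH_DIR = r"C:\flow-dropzone"

class NewFileHandler(FileSystemEventHandler):
    def on_created(self, event):
        if not event.is_directory:
            # Run the processing script (placeholder name) on the new file.
            subprocess.call([sys.executable, "process_upload.py", event.src_path])

observer = Observer()
observer.schedule(NewFileHandler(), WATCH_DIR)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()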

Related

How can I get the token-sheets.pickle and token-drive.pickle files to use ezsheets for Google Sheets?

I am trying to set up ezsheets for use with Google Sheets. I followed the instructions from here https://ezsheets.readthedocs.io/en/latest/ and here https://automatetheboringstuff.com/2e/chapter14/
The setup process works quite differently on my computer: I managed to download credentials-sheets.json, but I still need the token-sheets.pickle and token-drive.pickle files. When I run import ezsheets, no browser window is opened as described in the setup instructions. Nothing happens.
Is there another way to download both files?
I followed the steps you referenced and managed to generate the files, but I also encountered the same issue before figuring out the cause. The problem is that there are a few possible causes and the script silently fails without telling you exactly what happened.
Here are a few suggestions:
First off, you need to configure your OAuth Consent Screen. You won't be able to create the credentials without it.
Make sure that you have the right credentials file. To generate it you have to go to the Credentials page in the Cloud Console. The docs say that you need an OAuth Client ID. Make sure that you have chosen the correct app at the top.
Then you will be prompted to choose an application type. According to the docs you shared the type should be "Other", but this is no longer available so "Desktop app" is the best equivalent if you're just running a local script.
After that you can just choose a name and create the credentials. You will be prompted to download the file afterwards.
Check that the credentials-sheets.json file has that exact name.
Make sure that the credentials-sheets.json file is located in the same directory where you're running your Python script or console commands.
Check that you've enabled both the Sheets and Drive API in your GCP Project.
Python will try to set up a temporary server on http://localhost:8080/ to retrieve the pickle files. If another application is using port 8080, the process will also fail. In my case a previously failed Python script was hanging on to that port.
To find and close the processes using port 8080 you can refer to this answer for Linux/Mac or this other answer for Windows. Just make sure that the process is not something you're currently using (a quick port check is sketched below).
I just used a single import ezsheets command to get the files, so after getting token-sheets.pickle I had to run it again to get token-drive.pickle; after that the library should detect that you already have the files.
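As a quick check for the port-8080 suggestion above, a small sketch (standard library only) that tells you whether something is already holding the port:

import socket

# ezsheets' OAuth flow starts a temporary local server on port 8080.
# If this bind fails, another process is holding the port and the
# browser handshake will silently fail.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    try:
        s.bind(("localhost", 8080))
        print("Port 8080 is free; the OAuth redirect should work.")
    except OSError:
        print("Port 8080 is in use; close the offending process first.")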

Python program crashes when run through a Windows service

My client has provided me with a Python console application which performs some work and writes the result into a .txt file. My task is to write a Windows service which reads that particular .txt file and performs further actions.
I used C# on .NET to write the service. My solution contains 3 projects:
The logic layer project.
The Windows service layer project.
A test app layer project, used for debugging and other purposes.
Both the Windows service layer and the test app layer use the logic layer for core functionality. When I run the application through the test layer, everything works perfectly, but whenever I run it through the service, the standalone Python application that the service launches doesn't write any output files. I can see in Task Manager that the Python app runs, but there's no output anywhere. I believe the Python code is crashing, but I couldn't find the exact reason.
I've tried the following ways to debug the issue:
Searched the Windows and System32 directories for any related output files, just to consider the possibility of the service having these directories as the default working directory.
Used absolute paths in the service code to make sure that the Python part is not writing output files to some unknown location.
Had the client implement passing the output directory to the Python code through command line arguments.
Wrote a mock console app in C# which writes a file, tried to call it through the service, but it worked fine and wrote the file as expected.
Suspected the standard IO could be causing the Python application to crash and thus used the standard IO in my mock program, but it worked without any issues.
Tried giving the Python code a long task, one that should've taken about 30 minutes to execute completely, but the Python script ran and closed immediately, which is strong evidence for the theory that it crashes at some point.
Tried running the service with my unelevated Windows user instead of the Local System pseudouser.
Tried configuring the service to be able to interact with the desktop.
I am all out of ideas here. Any direction I should also search in?
Just a note, if that matters: I am using System.Diagnostics.Process to launch the Python script.
If it works from your test app, it sounds like a permissions issue. What security context/user is the Windows service running as, and does that user have permission to write to the filesystem location where you are expecting output? Have you tried using a full path to the output file to be sure where it is expected?
I'd be inclined to write a tiny Python app that just saves "hello world" to a file, get that to work from a Windows service, then build up from there.
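Something along these lines, with a placeholder path (if even this fails under the service, the problem is environment or permissions rather than the real script):

# Minimal smoke test: write one file from under the Windows service.
with open(r"C:\temp\hello.txt", "w") as f:  # placeholder path
    f.write("hello world\n")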
Thanks to the help from timhowarduk, I was finally able to get to the root cause of the problem. The Python script was looking for a configuration file, and when it ran from the Windows service, it looked for that config file in System32.
All Windows services run with System32 as their working directory.
That caused the Python script to search in System32, since it was running as part of the Windows service. I guess I might just ask the client to edit the Python script to read the config from the Windows service application directory, for example by resolving the path relative to the script itself (sketched below).
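For reference, a common Python pattern for that fix, assuming the config file sits next to the script (the file name is a placeholder):

import os

# When launched by a Windows service, the working directory is
# C:\Windows\System32, so relative paths resolve there. Anchoring
# paths to the script's own location sidesteps the problem.
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
config_path = os.path.join(SCRIPT_DIR, "config.ini")  # placeholder name
with open(config_path) as f:
    config_text = f.read()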

How can I execute custom code using Flow?

I've made a PowerApps app which uploads an image to SharePoint. When Flow detects that this image has been uploaded, I want to run a custom script that can interact with the Excel file. PowerShell should accomplish that, but I'm completely lost when it comes to running PowerShell code from Flow.
My goal is to use an Excel macro to combine the image and an Excel file that is stored in the same location in SharePoint. PowerShell will execute the macro and delete the picture afterwards.
I've found this guide, https://flow.microsoft.com/en-us/blog/flow-of-the-week-local-code-execution/, but I don't think it will work for me, as the app will be running on more devices than just my local computer.
What technology can I use to run code using Flow as a trigger? The code must have access to a specific SharePoint site as well.
I believe you can create an Azure Function that will execute PowerShell. This will execute from the cloud rather than on your local machine.
I'd also like to add a solution that worked great for me: using Flow to send HTTP requests to a REST API!
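Combining the two suggestions, a minimal sketch of an HTTP-triggered Azure Function that a Flow HTTP action could call (Python is shown here as an assumption; the same shape exists for PowerShell, and the parameter name is a placeholder):

import logging
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Flow's HTTP action passes the SharePoint file URL as a query
    # parameter; "fileUrl" is a placeholder name.
    file_url = req.params.get("fileUrl")
    logging.info("Processing %s", file_url)
    # ... interact with the SharePoint site / Excel file here ...
    return func.HttpResponse("OK", status_code=200)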

Run a Python script inside Azure Data Factory that calls APIs using MSI

We have a REST API hosted on Azure as a web application which provides JSON output when invoked.
Inside Data Factory we need to run a Databricks activity that contains Python code. Currently we store the credentials inside the script; the Python script calls the URL/web app with the credentials and we do the magic.
But we don't want to store the credentials, and we are thinking of using MSI. Is it possible for the Python script to retrieve credentials via MSI and call the API?
I thought of having a web app activity before the Databricks activity that uses MSI and passes the result as an input, but I'm not sure if that is a good idea.
Does anyone know how to pass MSI credentials to Python to call a web app inside Azure?
I am referring to this link:
https://learn.microsoft.com/en-us/python/azure/python-sdk-azure-authenticate?view=azure-python
but I am not sure what I need to get the credentials: the resource ID? An application ID?
I'd appreciate it if someone has a small script/example to share =)
Thanks, guys.
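A minimal sketch of that pattern, assuming the msrestazure package, that the compute running the script has a managed identity enabled, and a placeholder resource URI for the target web app:

import requests
from msrestazure.azure_active_directory import MSIAuthentication

# Placeholder: the App ID URI (resource) of your web app's AAD registration.
RESOURCE = "https://my-api.azurewebsites.net"

# Fetches a token from the local MSI endpoint -- no secrets stored in code.
credentials = MSIAuthentication(resource=RESOURCE)
token = credentials.token["access_token"]

# Placeholder endpoint on the web app.
response = requests.get(
    f"{RESOURCE}/api/values",
    headers={"Authorization": f"Bearer {token}"},
)
print(response.json())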

Running an exe file on Azure

I am working on an Azure web app, and inside the web app I use Python code to run an exe file. The web app receives certain inputs (numbers) from the user and stores those inputs in a text file. Afterwards, an exe file runs, reads the inputs, and generates another text file called "results". The problem is that although the code works fine on my local computer, as soon as I put it on Azure, the exe file does not get triggered by the following line of code:
subprocess.call('process.exe', cwd=case_directory.path, shell=True)
I even tried running the exe file on Azure manually from Visual Studio Team Services (formerly Visual Studio Online) using the "run from Console" option. It just did not do anything. I'd appreciate it if anyone can help me.
Have you looked at using a WebJob to host/run your executable? A WebJob can be virtually any kind of script or Windows executable. There are a number of ways to trigger a WebJob, and you also get a lot of built-in monitoring and logging for free via the Kudu interface.
@F.K I found some information which may be helpful for you; please see below.
According to the Python documentation for the subprocess module, using shell=True can be a security hazard; see the warning under Frequently Used Arguments for details. A safer invocation is sketched below.
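A safer variant of the call from the question, reusing its case_directory object and checking the exit code so failures surface in the logs:

import os
import subprocess

# Absolute path, no shell, and an explicit exit-code check.
exe_path = os.path.join(case_directory.path, "process.exe")
rc = subprocess.call([exe_path], cwd=case_directory.path, shell=False)
if rc != 0:
    raise RuntimeError(f"process.exe exited with code {rc}")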
Normally, however, the recommended way to satisfy your needs is to use Azure Queue Storage, Blob Storage, and Azure WebJobs: save the input file into a storage queue, then have a continuous WebJob pick files up from the queue, process them, and save the result files into blob storage.
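A rough sketch of that queue-driven pattern as a continuous WebJob, using the azure-storage-queue package (an assumption; the queue name, connection string, and executable handling are placeholders):

import subprocess
import time

from azure.storage.queue import QueueClient  # pip install azure-storage-queue

CONNECTION_STRING = "..."  # placeholder: storage account connection string
queue = QueueClient.from_connection_string(CONNECTION_STRING, "inputs")

while True:  # continuous WebJob: poll for new work items
    for msg in queue.receive_messages():
        # msg.content would identify the uploaded input file (placeholder
        # convention); run the executable against it, then upload the
        # "results" file to blob storage.
        subprocess.call(["process.exe"], cwd=r"D:\home\site\wwwroot")
        queue.delete_message(msg)
    time.sleep(5)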
