I've made a PowerApps app which uploads an image to SharePoint. When Flow detects that this image has been uploaded, I want to run a custom script that can interact with an Excel file. PowerShell should accomplish that, but I'm completely lost when it comes to running PowerShell code from Flow.
My goal is to use an Excel macro to combine the image and an Excel file that is stored in the same location in SharePoint. PowerShell will execute the macro and delete the picture after.
I've found this guide (https://flow.microsoft.com/en-us/blog/flow-of-the-week-local-code-execution/), but I don't think it will work for me, as the app will be running on more devices than just my local computer.
What technology can I use to run code using Flow as a trigger? The code must have access to a specific SharePoint site as well.
I believe you can create an Azure Function that executes PowerShell. This runs in the cloud rather than on your local machine.
I'd also like to add a solution that worked great for me: Using Flow to send HTTP requests to a REST API!
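To make the REST API approach concrete: Flow's HTTP action can POST JSON to an endpoint you host, and your script runs server-side when the request arrives. Below is a minimal sketch using only Python's standard library (no web framework); the payload field `fileUrl` and the port are illustrative assumptions, not from the original post.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class FlowHandler(BaseHTTPRequestHandler):
    """Minimal endpoint that a Flow HTTP action could POST to."""

    def do_POST(self):
        # Read the JSON body sent by the Flow HTTP action.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Here you would kick off your processing, e.g. against
        # payload["fileUrl"] (field name is an assumption).
        body = json.dumps({"received": payload}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

# To serve for real (blocking call):
# HTTPServer(("0.0.0.0", 8000), FlowHandler).serve_forever()
```

In Flow you would then add an HTTP action pointing at this endpoint, with the SharePoint file URL in the JSON body.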
To start, I managed to successfully run pywin32 locally: it opened the Excel workbooks, refreshed the SQL query, then saved and closed them.
I had to download those workbooks locally from SharePoint and sync them back via OneDrive to apply the changes.
My question is: would this be possible to do within SharePoint itself? That is, have a Python script scheduled on a server and have the process occur there in the backend through a command.
I use a program called Alteryx, where I can have batch files execute scripts, and maybe I could use an API of some sort to accomplish this on a scheduled basis, since that's the only server I have access to.
I have tried looking on this site and other sources, but I can't find a post that addresses this specifically.
I use Jupyter Notebooks to write my scripts and Alteryx to build a workflow around them, but I can use other IDEs if I need to.
I am trying to set up ezsheets for use with Google Sheets. I followed the instructions from here https://ezsheets.readthedocs.io/en/latest/ and here https://automatetheboringstuff.com/2e/chapter14/
The setup process works quite differently on my computer: somehow I could download credentials-sheets.json, but I still need to download the token-sheets.pickle and token-drive.pickle files. When I run import ezsheets, no browser window is opened as described in the setup instructions. Nothing happens.
Is there another way to download both files?
I followed the steps you referenced and managed to generate the files, but I encountered the same issue before figuring out the cause. The problem is that there are a few possible causes, and the script fails silently without telling you exactly what happened.
Here are a few suggestions:
First off, you need to configure your OAuth consent screen. You won't be able to create the credentials without it.
Make sure that you have the right credentials file. To generate it, you have to go to the Credentials page in the Cloud Console. The docs say that you need an OAuth client ID; make sure that you have chosen the correct app at the top.
Then you will be prompted to choose an application type. According to the docs you shared the type should be "Other", but this is no longer available so "Desktop app" is the best equivalent if you're just running a local script.
After that you can just choose a name and create the credentials. You will be prompted to download the file afterwards.
Check that the credentials file is named exactly credentials-sheets.json.
Make sure that the credentials-sheets.json file is located in the same directory where you're running your Python script or console commands.
Check that you've enabled both the Sheets and Drive API in your GCP Project.
Python will try to set up a temporary server on http://localhost:8080/ to retrieve the pickle files. If another application is using port 8080, this step will also fail. In my case, a previously failed Python script was hanging on to that port.
To find and close the processes using port 8080 you can refer to this answer for Linux/Mac or this other answer for Windows. Just make sure that the process is not something you're currently using.
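Before hunting down processes, you can quickly confirm from Python whether port 8080 is actually taken. This is a small standard-library sketch (8080 is the default port mentioned above):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on the given port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        # connect_ex returns 0 when the connection succeeds,
        # i.e. something is listening there.
        return sock.connect_ex((host, port)) == 0

if port_in_use(8080):
    print("Port 8080 is busy; close whatever holds it before importing ezsheets.")
```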
I just used a single import ezsheets command to get the files, so after getting token-sheets.pickle I had to run it again to get token-drive.pickle; after that, the library should detect that you already have both files.
I am developing a Python script that downloads some Excel files from a web service. These two files are combined with another one stored locally on my computer to produce the final file. This final file is loaded into a database and a Power BI dashboard to visualize the data.
My question is: how can I schedule this to run daily if my computer is turned off? As I said, two files are web-scraped (so no problem scheduling those), but one file is stored locally.
One solution that comes to mind: store the local file in Google Drive/OneDrive and download it with the API, so my script is not dependent on my computer. But if this were the case, how could I schedule that? What service would you use? Heroku,...?
I am not entirely sure about your context, but I think you could look into using AWS Lambda for this. It is reasonably easy to set it up and also create a schedule for running code.
It is even easier to achieve this using the serverless framework. This link shows an example built with Python that runs on a schedule.
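For reference, a scheduled function in the serverless framework boils down to a few lines of YAML. This is only a sketch: the service name, function name, handler module, and region are illustrative placeholders, not from the original post.

```yaml
# serverless.yml -- names and region are illustrative placeholders
service: daily-report

provider:
  name: aws
  runtime: python3.9
  region: eu-west-1

functions:
  buildReport:
    handler: handler.run        # handler.py must define run(event, context)
    events:
      - schedule: cron(0 6 * * ? *)   # every day at 06:00 UTC
```

Deploying this with `serverless deploy` creates a Lambda that AWS invokes on the cron schedule, with no machine of yours involved.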
I am running the schedule package for exactly something like that.
It's easy to set up and works very well.
Would anyone know the approach to calling a Python application/script (which relies on imported libraries like pandas) from Microsoft Flow?
The complete problem is this: a client uploads something to Dropbox (that is how his ERP works). This action is linked to our Microsoft Flow, so whenever he uploads a file, Microsoft Flow registers it (right now it just redirects it to us). What I need to do next is run the Python application/script on that file in Dropbox after he uploads it. I can put my Python script directly into that Dropbox. What I do not know is how to trigger it and ensure it runs (is interpreted).
Thank you for any suggestions.
To execute any code locally from MS Flow, install and configure the on-premises data gateway as per the instructions in the link below:
https://learn.microsoft.com/en-us/flow/gateway-reference
Create a flow to copy the new file from Dropbox to the local filesystem.
Create a watcher on the local directory for any new file creation, and execute the Python code for each new file.
Refer to the link below for PowerShell script execution; a similar process can be designed for Python:
https://flow.microsoft.com/en-us/blog/flow-of-the-week-local-code-execution/
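The watcher step above can be a simple polling loop in Python using only the standard library. This is a sketch: the directory path and the per-file command are placeholders for your setup, and `max_polls` exists only so the loop can terminate.

```python
import os
import time

def watch(directory, on_new_file, interval=5.0, max_polls=None):
    """Poll `directory` and call `on_new_file(path)` for each file that appears.

    Runs forever unless `max_polls` is given.
    """
    seen = set(os.listdir(directory))  # snapshot of what is already there
    polls = 0
    while max_polls is None or polls < max_polls:
        for name in sorted(set(os.listdir(directory)) - seen):
            on_new_file(os.path.join(directory, name))
            seen.add(name)
        polls += 1
        time.sleep(interval)

# Example wiring (path and script name are placeholders):
# import subprocess
# watch(r"C:\flow-dropbox",
#       lambda p: subprocess.run(["python", "process.py", p]))
```

A production version would use a library like watchdog for filesystem events instead of polling, but the polling loop is easier to reason about and has no dependencies.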
I am working on an Azure web app, and inside the web app I use Python code to run an exe file. The web app receives certain inputs (numbers) from the user and stores those inputs in a text file. Afterwards, an exe file runs, reads the inputs, and generates another text file called "results". The problem is that although the code works fine on my local computer, as soon as I put it on Azure, the exe file does not get triggered by the following line of code:
subprocess.call('process.exe', cwd=case_directory.path, shell=True)
I even tried running the exe file on Azure manually from Visual Studio Team Services (formerly Visual Studio Online) via the "run from Console" option. It just did not do anything. I'd appreciate it if anyone can help me.
Have you looked at using a WebJob to host/run your executable? A WebJob can be virtually any kind of script or Windows executable, and there are a number of ways to trigger it. You also get a lot of built-in monitoring and logging for free via the Kudu interface.
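If you go the WebJob route and want it to run on a timer, a triggered WebJob can be scheduled by placing a settings.job file next to the executable. This is a sketch; the cron expression below (every 15 minutes, six-field NCRONTAB format) is just an example value:

```json
{
  "schedule": "0 */15 * * * *"
}
```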
#F.K I searched for some information which may be helpful for you; please see below.
According to the Python documentation for the subprocess module, using shell=True can be a security hazard. Please see the warning under Frequently Used Arguments for details.
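Following that warning, you can usually drop shell=True and instead pass the program as an argument list with an absolute path, which also makes failures easier to diagnose via the return code. A minimal sketch, with sys.executable standing in for the real process.exe:

```python
import os
import subprocess
import sys

# An absolute path avoids PATH/working-directory surprises on the server.
exe_path = os.path.abspath(sys.executable)  # stand-in for the real process.exe

# Argument list + shell=False (the default): no shell is involved, and
# result.returncode / result.stderr tell you whether the exe actually ran.
result = subprocess.run(
    [exe_path, "-c", "print('done')"],
    capture_output=True,
    text=True,
)
print(result.returncode, result.stdout.strip())
```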
There is a comment on the article which gives a direction for the issue; please see the screenshot below.
However, the normally recommended way to satisfy your needs is to use Azure Queue Storage, Blob Storage, and Azure WebJobs: save the input file into a storage queue, then have a continuous WebJob pick the files off the queue and save the result files into blob storage.