So I have the task of automating a workflow such that:
Whenever an Excel file (a.xlsx) is added or modified in a SharePoint folder ->
My custom data extractor code will process this Excel file ->
Extracted data will be stored as a new Excel file (b.xlsx) in another folder on SharePoint.
This has to be achieved using Power Automate or Logic Apps with Azure Functions, but I am not able to wrap my head around how to go about it.
Has anyone implemented something like this before?
PS: My code is in Python.
So, when a.xlsx is created or updated, you want to perform some action on that file before saving it as b.xlsx in another folder.
If it is something that cannot be done with Power Automate/Logic Apps alone, you can insert an Azure Function into your flow in 2 different ways:
Using an Azure Function Action (more here)
Using an Http Action (more here)
You will need an Azure Function with an HTTP trigger:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook-trigger?tabs=in-process%2Cfunctionsv2&pivots=programming-language-python
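As a rough sketch, assuming the flow posts the raw content of a.xlsx in the request body and your extractor can work on an openpyxl workbook (the extract_data helper below is a placeholder for your own code, and the function.json HTTP binding is assumed), the function body could look like this:

    import io
    import logging

    import azure.functions as func
    import openpyxl


    def extract_data(wb):
        # Placeholder for your custom extraction logic.
        return wb


    def main(req: func.HttpRequest) -> func.HttpResponse:
        logging.info("Processing a.xlsx sent by the flow")

        # The flow's HTTP/Azure Function action passes the content of
        # a.xlsx as the raw request body.
        wb = openpyxl.load_workbook(io.BytesIO(req.get_body()))

        out = io.BytesIO()
        extract_data(wb).save(out)

        # The flow takes this response body and writes it to b.xlsx in
        # the destination SharePoint folder with a "Create file" action.
        return func.HttpResponse(
            body=out.getvalue(),
            mimetype="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
        )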
If you can share what you need to do before saving as b.xlsx, I may be able to help more.
Related
I have a Power BI template set up and I need to create weekly reports using this template. I only use a single CSV file per report and the structures of the files are all identical (same number of columns, same headers, etc).
NOTE: A new CSV file is downloaded and used every time I make a report. In other words, every week I download a new CSV file and make a new report from that file.
Right now, my Power BI template has a parameter that asks the user to input the file path to the CSV file before loading the report. I want to know if there is a way I can automate the following:
Opening the Power BI template file
Inputting the file path into the parameter field
Pressing the "Load" button
I understand that I could use Python and PyAutoGUI to control my desktop and, by extension, Power BI. I just wanted to know if there's another way to automate this.
I understood your task. Here are the steps you need to take.
Download and install the Power BI personal gateway on your machine.
https://powerbi.microsoft.com/en-us/gateway/
Go to the settings of your dataset in the Power BI service (in your browser):
https://i.stack.imgur.com/vE9Bc.png
The gateway installed on your machine should be detected, as in this picture: https://i.stack.imgur.com/f3MS1.png
Set up a refresh schedule in this part of the dataset settings: https://i.stack.imgur.com/LByMO.png
Once all of this is done, you do not need to upload or publish manually from your machine; the gateway will push your new, fresh data to the report on its own.
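One detail worth adding: the gateway refreshes from a fixed location, so point your template's file-path parameter at one fixed file and overwrite it with each week's download. A minimal sketch, assuming hypothetical paths:

    import shutil
    from pathlib import Path

    # Hypothetical paths: adjust to wherever the weekly CSV lands and
    # wherever the Power BI parameter points.
    downloaded = Path.home() / "Downloads" / "this_week.csv"
    fixed_target = Path(r"C:\Reports\weekly.csv")

    # Overwrite the file the dataset reads from; the gateway's scheduled
    # refresh then picks up the new data automatically.
    shutil.copyfile(downloaded, fixed_target)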
Cheers!!
I can manually access some Quicksight dashboards and hit "export to CSV", but I want to automate that task (as it is the first of many steps to updating some reports). Can this be done automatically using Python?
Note: I did not create the dashboards in Quicksight, I just have privileges to see them.
Since the datasets QuickSight consumes are stored in other locations (it is fed from S3, Athena tables, etc.), exporting from QuickSight is not the way to go. If you know the location of the dataset/data source, go there directly; for example, pull the CSV straight from S3.
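For example, if the dataset behind the dashboard lives in S3, a boto3 sketch (the bucket and key are placeholders, and you need read access to the underlying data, not just the dashboard):

    import boto3

    # Placeholder location: point these at the dataset's actual bucket/key.
    BUCKET = "my-data-bucket"
    KEY = "exports/dashboard-data.csv"

    # Pull the raw CSV that QuickSight visualizes straight from S3.
    s3 = boto3.client("s3")
    s3.download_file(BUCKET, KEY, "dashboard-data.csv")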
I am developing a web application in which users can upload Excel files. I know I can use the OPENROWSET function to read data from Excel into SQL Server, but I am refraining from doing so because this function requires a file path.
It seems kind of indirect, as I am uploading a file to a directory and then telling SQL Server to go look in that directory for the file, instead of just giving SQL Server the file.
The other option would be to read the Excel file into a pandas DataFrame and then use the to_sql function, but pandas' read_excel function is quite slow, and I am sure the other method would be faster.
Which of these two methods is "correct" when handling file uploads from a web application?
If the first method is not frowned upon or "incorrect", then I am almost certain it is faster and will use that. I just want an experienced developer's thoughts or opinions. The web app's backend is Python and Flask.
If I am understanding your question correctly, you are trying to load the contents of an xls(x) file into a SQL Server database. This is actually not trivial, as depending on what is in the Excel file you might want one table, or more probably multiple tables, based on the data. So I would step back for a bit and consider three things:
What is the data I need to save, and how should it be structured in my SQL tables? Forget about Excel at this point; maybe just examine the first row of data and see how you need to store it.
How do I get the file into my web application? For example, when the user uploads a file you would want to use a POST form, send the file data to your server, and have the server save that file (for example, on S3, in a /tmp folder, or in memory for temporary processing).
Now that you know what your input is (the xls(x) file and its location) and how you need to save your data (the SQL schema), it's time to decide on the best tool for the job. Pandas is probably not a good fit unless you literally just want to load the file and dump it as-is, with minimal (if any) changes, into a single table. At this point I would suggest something like xlrd for legacy xls files, or openpyxl for xlsx files. This way you can shape your data any way you want and handle, for example, malformed dates, empty cells (should they default to something?), mismatched types, etc.
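For instance, a minimal sketch of that shaping step with openpyxl (the three-column layout and the defaults are assumptions for illustration):

    from datetime import date, datetime

    from openpyxl import load_workbook


    def parse_rows(path):
        # Yield cleaned (name, amount, when) tuples from the uploaded workbook.
        ws = load_workbook(path, read_only=True).active
        for name, amount, when in ws.iter_rows(min_row=2, values_only=True):
            if name is None:                 # skip blank rows entirely
                continue
            amount = float(amount or 0)      # empty cell -> default to 0
            if not isinstance(when, (date, datetime)):
                when = None                  # malformed date -> NULL in SQL
            yield str(name).strip(), amount, when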
In other words, the task you're describing is not trivial at all. It will take quite a bit of planning and designing, and then a good deal of Python code once your design is decided. Feel free to ask more specific questions here if you need to (for example, how to capture the POST data in a file upload; see the sketch below).
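For the POST side specifically, a minimal Flask sketch (the endpoint and field names are placeholders):

    import os
    import tempfile

    from flask import Flask, request
    from werkzeug.utils import secure_filename

    app = Flask(__name__)


    @app.route("/upload", methods=["POST"])
    def upload():
        # The HTML form must use enctype="multipart/form-data";
        # "file" is the form field name chosen here for illustration.
        f = request.files["file"]

        # Save to a temp folder for processing (S3 or memory also work).
        path = os.path.join(tempfile.gettempdir(), secure_filename(f.filename))
        f.save(path)

        # ...hand `path` to your parsing + SQL insert step here...
        return "ok", 200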
I have data in an Excel file that I would like to use to create a case in PSSE. The data is organized as it would appear in a PSSE case (i.e., for each bus: bus number, name, base kV, and so on). Of course the data can be entered manually, but I'm working with over 500 buses. I have tried copying and pasting, but that seems to work only sometimes; for machine data, it barely works.
Is there a way to import this data into PSSE from an Excel file? I have recently started running PSSE with Python; maybe there is a way to do it that way?
--
MK.
Yes. You can import data from an Excel file into PSSE using the Python package xlrd; however, I would recommend converting your Excel file to CSV before importing, as CSV is much easier to work with. Importing data through the API is not just a copy-and-paste job into the nicely tabulated spreadsheet view that PSSE offers for its case data.
Refer to the API documentation for PSSE, chapter II. Search this function, BUS_DATA_2. You will see that you can create buses with this function.
So your job is threefold (a rough sketch follows the steps below):
Read the CSV file so that each line becomes a list of the data parameters for one bus (voltage, name, base kV, PU, etc.), and store those lists in another list.
Iterate through the list you just created and call:
ierr = psspy.bus_data_2(i, intgar, realar, name)
passing in your data from the CSV file (see the PSSE API documentation on how to do this). This will effectively load the data from the CSV file into your case, in the form of nodes or buses.
After you are finished, call psspy.save("Casename.sav") to save your work as a new PSSE case.
Note: there are similar functions to load line data, fixed shunt data, generator data, etc.
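Put together, a sketch of the three steps might look like the following. The column order in the CSV and the exact layout of intgar/realar are assumptions; check BUS_DATA_2 in the API manual before relying on them:

    import csv

    import psspy

    psspy.psseinit(10000)  # example bus dimension

    # Step 1: read each csv line into a list of bus parameters.
    with open("buses.csv", newline="") as f:
        rows = list(csv.reader(f))

    # Step 2: create one bus per row. Assumed columns: number, name, base kV.
    for row in rows:
        bus_number = int(row[0])
        name = row[1]
        base_kv = float(row[2])

        intgar = [1, 1, 1, 1]              # e.g. bus type code, area, zone, owner
        realar = [base_kv, 1.0, 0.0, 0.0]  # e.g. base kV plus voltage defaults

        ierr = psspy.bus_data_2(bus_number, intgar, realar, name)
        if ierr != 0:
            print("bus_data_2 failed for bus", bus_number, "ierr =", ierr)

    # Step 3: save the new case.
    psspy.save("Casename.sav")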
Your other option is to call up the PTI folks, as they can give you training.
Good luck
If you have an Excel data file with exactly the same "format" and same "info" as the regular case file (.sav), try this:
Open any small example .sav file from the Example sub-folder of PSSE's installation folder.
Copy the corresponding spreadsheet data into the working case (shown in the spreadsheet view) for each kind of "info" (say, bus, branch, etc.) in the PSSE GUI.
After copying everything, save the edited working case in the GUI as a new working case.
If this doesn't work, I suggest you ask this question on the "Python for Power Systems" forum:
https://psspy.org/psse-help-forum/questions/
How do you build an Excel RTD server in Python/IronPython? (I.e., I want to implement the IRtdServer interface in Python/IronPython so I can push data into Excel in real time from Python/IronPython.)
I have looked all over the place, but the only examples I'm able to find are in C#.
You can use pyrtd, which implements an Excel RTD server.
You can find the code on the project's Google Code page.
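If pyrtd doesn't fit, note that the IRtdServer interface itself is small. Below is a bare skeleton of the six methods Excel calls; the hard part, registering the class as a COM server (which pyrtd, or pywin32's win32com server machinery, handles for you), is omitted, and the exact COM marshalling of RefreshData's return value differs from this plain-Python sketch:

    class SkeletonRtdServer:
        # Shape of the IRtdServer interface; not a registered COM server.

        def ServerStart(self, callback):
            # Excel hands us a callback object; call callback.UpdateNotify()
            # whenever new data is ready. Return a value >= 1 on success.
            self.callback = callback
            self.topics = {}
            return 1

        def ConnectData(self, topic_id, strings, get_new_values):
            # Called once per =RTD(...) formula; `strings` holds its arguments.
            self.topics[topic_id] = strings
            return "pending..."              # initial cell value

        def RefreshData(self, topic_count):
            # Excel asks for updates after UpdateNotify; report a value for
            # every changed topic (in COM, a two-row array plus a count).
            return [(tid, "new value") for tid in self.topics]

        def DisconnectData(self, topic_id):
            # The last formula referencing this topic was removed.
            self.topics.pop(topic_id, None)

        def Heartbeat(self):
            return 1                         # tell Excel we are still alive

        def ServerTerminate(self):
            self.topics.clear()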