Windows Server file permission error when using watchdog - Python

I have a Python script that uses watchdog and pandas to automatically load a newly added Excel file as soon as it is pasted into a given path.
The code works well on my local machine, but when I run it against files on Windows Server 2012 R2 I get a file permission error. What is the best solution?
NB: I am able to read the same files with pandas read_excel() without watchdog, but I want to automate the process so that the files are read automatically every time they are uploaded.

A few possible reasons you might get a permission denied error:
The file is locked because another process still has it open, for example the copy that triggered the event has not finished yet (see the sketch below).
Your account doesn't have permission to read/write/execute the file.
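
If it is the lock case, the usual fix is to retry the read with a backoff after the event fires, since watchdog often reports the file before the copy has finished. A minimal sketch, assuming the share path and extension filter below (both placeholders, not from the question):

import time
import pandas as pd
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

WATCH_PATH = r"\\myserver\shared\uploads"  # placeholder UNC path

class ExcelHandler(FileSystemEventHandler):
    def on_created(self, event):
        if event.is_directory or not event.src_path.lower().endswith((".xls", ".xlsx")):
            return
        # The copy may still be in progress when the event fires, so the file
        # can be locked; retry with a growing delay until it can be opened.
        delay = 1
        for _ in range(8):
            try:
                df = pd.read_excel(event.src_path)
                print("Loaded %s: %d rows" % (event.src_path, len(df)))
                return
            except PermissionError:
                time.sleep(delay)
                delay = min(delay * 2, 30)
        print("Gave up waiting for %s to be released" % event.src_path)

observer = Observer()
observer.schedule(ExcelHandler(), WATCH_PATH, recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()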

Related

When using Python to save files, can I skip the uploading-to-OneDrive step and keep it in the background?

I am trying to use Python to save files in a local directory that is linked to OneDrive.
My Python script doesn't finish until the upload to OneDrive completes, which can take a while when the file is large (>200 MB).
My question is: can I save the file to the local OneDrive folder as the last step of my script, and let OneDrive do the uploading in the background, like OneDrive usually does when you save/copy/move files in Windows?
For example, if I use shutil to copy a file between the same folders, it doesn't invoke the OneDrive upload while the script is running. Once the script has finished, OneDrive syncs the changes in the background.
import win32com.client as win32

# Open the workbook from the OneDrive-synced folder and save a copy there.
excel = win32.gencache.EnsureDispatch('Excel.Application')
wb = excel.Workbooks.Open("C:/Users/OneDrive/my scripts/Example.xlsx")
wb.SaveAs("C:/Users/OneDrive/my scripts/files/New.xlsx")
excel.Application.Quit()

I skipped the manipulations of the Excel file as they are not relevant.
import shutil
shutil.copyfile("C:/Users/OneDrive/my scripts/Example.xlsx", "C:/Users/OneDrive/my scripts/files/New.xlsx")
It looks like OneDrive will synchronize with the cloud automatically. If your needs are time-critical, you can force a sync using the %localappdata%\Microsoft\OneDrive\onedrive.exe command with the /reset option.
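A minimal sketch of invoking that command from Python, assuming OneDrive.exe is at its default per-user location (note that /reset restarts the sync client, so use it sparingly):

import os
import subprocess

# %localappdata% expands per user; /reset restarts the OneDrive client.
onedrive = os.path.expandvars(r"%localappdata%\Microsoft\OneDrive\OneDrive.exe")
subprocess.run([onedrive, "/reset"])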

Refresh All Data on an Excel Workbook in SharePoint Using Python

To start: I managed to run pywin32 successfully on my local machine, where it opened the Excel workbooks, refreshed the SQL query, then saved and closed them.
I had to download those workbooks locally from SharePoint and let OneDrive sync them to apply the changes.
My question is: would this be possible to do within SharePoint itself? That is, have a Python script scheduled on a server and have the process occur there in the back end through a command.
I use a program called Alteryx where I can have batch files execute scripts, and maybe I could use an API of some sort to accomplish this on a scheduled basis, since that's the only server I have access to.
I have looked on this site and other sources, but I can't find a post that covers this specifically.
I use Jupyter Notebooks to write my scripts and Alteryx to build a workflow with those scripts, but I can use other IDEs if I need to.
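
For reference, the locally working pywin32 refresh described above typically looks something like this (a minimal sketch; the path is a placeholder, and COM automation still requires a machine with desktop Excel installed):

import win32com.client as win32

PATH = r"C:\Users\me\OneDrive\Reports\workbook.xlsx"  # placeholder synced local copy

excel = win32.gencache.EnsureDispatch("Excel.Application")
excel.DisplayAlerts = False                # suppress save/overwrite prompts
wb = excel.Workbooks.Open(PATH)
wb.RefreshAll()                            # refresh every connection, incl. SQL queries
excel.CalculateUntilAsyncQueriesDone()     # block until async refreshes finish
wb.Save()
wb.Close(SaveChanges=False)
excel.Application.Quit()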

Create Azure WebJobs from a Django/Python web app

I have a Django app deployed on an Azure Web App, and I want to dynamically create webjobs. More precisely, when I save a Django model called Importer, I want to be able to create a new webjob, made of 3 files:
run.py : the Python webjob
settings.job : the cron schedule
config.ini : a config file for the webjob
The content of settings.job and config.ini comes from an admin form and is specific to the webjob.
When I save my model, the code creates a new directory in App_Data\jobs\triggered\{my job name} and copies the run.py from my app directory there.
This works. However, when I try to create a new text file called settings.job in the job directory and write the cron schedule to it, I get a server error.
I tried many things, but even the following basic test fails:
file = open('test.txt', 'w')
file.write('this is a test')
file.close()
It seems that I don't have the right to write a file to the disk. How can that be solved?
Also, I want to be able to modify the content of the config.ini and settings.job files when I update the corresponding Django model.
In addition, I tried to copy another file called run2.py to the webjob directory, and that fails too! I cannot copy any file other than run.py into that directory.
According to your description and my experience, I think the issue is caused by using a relative path in your code.
On Azure Web Apps, we only have permission to perform file operations under the path D:\home.
My suggestion is to use an absolute path instead of a relative one, such as D:/home/site/wwwroot/App_Data/jobs/triggered/<triggered-job-name>/test.txt or /home/site/wwwroot/App_Data/jobs/triggered/<triggered-job-name>/test.txt instead of test.txt. Also make sure the directory and its parent directories exist: use os.path.exists(path) to check for the path, and os.makedirs to create it before writing a file.
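A minimal sketch of that suggestion (my-importer is a hypothetical job name, and the schedule value is just an example):

import os

# D:\home is the writable area on a Windows Azure Web App.
job_dir = r"D:\home\site\wwwroot\App_Data\jobs\triggered\my-importer"

if not os.path.exists(job_dir):
    os.makedirs(job_dir)    # create the directory and any missing parents

with open(os.path.join(job_dir, "settings.job"), "w") as f:
    f.write('{"schedule": "0 */15 * * * *"}')    # cron: every 15 minutes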
Meanwhile, you can also try the WebJobs API for some of these operations, such as creating a webjob by uploading a zip file, or updating settings.job. To use these WebJobs APIs, be sure you have configured the deployment credentials in the Azure portal, and add the basic auth header Authorization: Basic <BASE64-Encode("deployment-username:password")> to the request.
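For example, creating (or replacing) a triggered webjob by uploading a zip through that API might look like the following sketch (site name, job name, file name, and credentials are all placeholders):

import base64
import requests

site = "mysite"
user, password = "deployment-username", "password"
token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()

with open("my-importer.zip", "rb") as f:
    resp = requests.put(
        "https://%s.scm.azurewebsites.net/api/triggeredwebjobs/my-importer" % site,
        headers={
            "Authorization": "Basic " + token,
            "Content-Type": "application/zip",
        },
        data=f,
    )
resp.raise_for_status()    # raises if the upload was rejected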
Here is my analysis:
There is no problem copying files other than run.py to the webjob directory.
The code crashes after (successfully) copying the file run.py from my Django app directory to the webjob directory. It does not matter whether I use shutil.copy/shutil.copyfile or simply open("{path}/run.py", "w"); the problem occurs whenever I try to write a file called run.py in the webjob directory.
I think that when we create a new webjob directory, the system detects a file called run.py and carries out some operations of its own. There is then a conflict, with two processes trying to access the same file at the same time.
My solution: I copy the Python script to the webjob directory under the name myrun.txt, and then rename it to run.py using os.rename.
It works!
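A sketch of that workaround (both paths are placeholders):

import os
import shutil

app_dir = r"D:\home\site\wwwroot\myapp"    # the Django app directory
job_dir = r"D:\home\site\wwwroot\App_Data\jobs\triggered\my-importer"

# Copy under a neutral name so the WebJobs runtime ignores it, then rename
# it into place; the rename appears not to trigger the conflicting scan.
tmp_path = os.path.join(job_dir, "myrun.txt")
shutil.copyfile(os.path.join(app_dir, "run.py"), tmp_path)
os.rename(tmp_path, os.path.join(job_dir, "run.py"))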

Running an exe file on Azure

I am working on an Azure web app, and inside the web app I use Python code to run an exe file. The web app receives certain inputs (numbers) from the user and stores those inputs in a text file. Afterwards, an exe file runs, reads the inputs, and generates another text file called "results". The problem is that although the code works fine on my local computer, as soon as I put it on Azure, the exe file does not get triggered by the following line of code:
subprocess.call('process.exe', cwd=case_directory.path, shell=True)
I even tried running the exe file on Azure manually from Visual Studio Team Services (formerly Visual Studio Online) via the "run from Console" option. It just did not do anything. I'd appreciate it if anyone can help me.
Have you looked at using a WebJob to host/run your executable? A WebJob can be virtually any kind of script or Windows executable. There are a number of ways to trigger your WebJob, and you also get a lot of built-in monitoring and logging for free via the Kudu interface.
@F.K I found some information which may be helpful for you; please see below.
According to the Python documentation for the subprocess module, using shell=True can be a security hazard; see the warning under "Frequently Used Arguments" for details.
There is a comment on that documentation article that points in a useful direction for this issue.
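In line with that warning, here is a sketch of invoking the exe without the shell, using absolute paths (both paths are placeholders):

import subprocess

exe_path = r"D:\home\site\wwwroot\bin\process.exe"    # placeholder absolute path
work_dir = r"D:\home\data\case1"                      # placeholder working directory

# Passing the full path in a list with shell=False avoids the shell entirely
# and surfaces a missing-file error instead of silently doing nothing.
ret = subprocess.call([exe_path], cwd=work_dir, shell=False)
print("process.exe exited with code", ret)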
However, the normally recommended way to satisfy your needs is to use Azure Queue Storage, Blob Storage, and Azure WebJobs together: save a reference to the input file in a storage queue, then have a continuous WebJob pick items off the queue, process the files, and save the result files to Blob Storage.
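A rough sketch of that queue-based pattern with the azure-storage-queue package (the connection string, queue name, and process_input function are all hypothetical):

from azure.storage.queue import QueueClient

conn_str = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..."
queue = QueueClient.from_connection_string(conn_str, "input-files")

def process_input(ref):
    print("processing", ref)    # hypothetical stand-in for running the exe

# The web app enqueues a reference to the uploaded input file...
queue.send_message("inputs/case1.txt")

# ...and a continuous WebJob polls the queue and processes each item.
for msg in queue.receive_messages():
    process_input(msg.content)
    queue.delete_message(msg)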

Best practice for watching and reliably uploading files in Python?

I'm building a desktop application for Windows in Python 2.7. The primary function of this application is to watch a folder for new files. Whenever a new file appears in this folder, the app uploads it to a remote server. A process on the remote server creates a DB record for the file and stores the remote file path in that record.
Currently I'm using watchdog to monitor the directory and httplib for the file upload.
What approach should I take to ensure that a new file is uploaded reliably regardless of network conditions or loss of the internet connection?
Update: What I mean by a reliable upload is that the app will finish uploading the file even if the app restarts, like Dropbox does. Some files are quite big (>100 MB), so simple solutions like wrapping the code in try/except and starting the upload all over are not very efficient. I know Dropbox uses librsync, but it might be overkill in this case.
What if the source file is changed during the upload? Should I stop the upload and start over?
You could maintain a file or database of file names, timestamps, and upload status. Based on that data you will know which files were already sent and what still needs uploading after any restart of the application or the computer.
A changed timestamp tells you that the file has been modified and the upload should be started over.
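A minimal sketch of such a ledger using sqlite3 (the schema and file name are illustrative):

import os
import sqlite3

db = sqlite3.connect("uploads.db")
db.execute("""CREATE TABLE IF NOT EXISTS uploads (
    path   TEXT PRIMARY KEY,
    mtime  REAL,     -- modification time recorded when the upload started
    status TEXT      -- 'pending' or 'done'
)""")

def mark_pending(path):
    db.execute("INSERT OR REPLACE INTO uploads VALUES (?, ?, 'pending')",
               (path, os.path.getmtime(path)))
    db.commit()

def mark_done(path):
    db.execute("UPDATE uploads SET status = 'done' WHERE path = ?", (path,))
    db.commit()

def needs_upload(path):
    row = db.execute("SELECT mtime, status FROM uploads WHERE path = ?",
                     (path,)).fetchone()
    # Upload if never seen, interrupted mid-upload, or modified since then.
    return row is None or row[1] != 'done' or row[0] != os.path.getmtime(path)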
