Windows permission error when deleting joblib memmapping folder in Python

I am trying to run Python code that trains an XGBoost model, and I want it to run in parallel to reduce the model-building time. But I am hitting this error while running the code:
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\<<username>>\\AppData\\Local\\Temp\\joblib_memmapping_folder_85680_9566857635\\85680-1746225537432-968de5958f0642829c37f0f09f0e8b00.pkl'
I have even tried running the Anaconda prompt as an administrator, but it is of no use. As a workaround I have also tried what is suggested in https://github.com/joblib/joblib/issues/806, but even then I am facing the same issue.
Could you please advise?
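For context, the workarounds usually suggested for this error are to stop joblib from leaving its memmapping folder where a still-running worker holds it open. A minimal sketch of one such workaround, assuming a recent joblib (this is an illustration, not a confirmed fix for the asker's setup):

```python
import os
import tempfile

# Point joblib's memmapping folder at a fresh, per-run temp directory so a
# stale, still-locked folder from a previous run can't collide with it.
# This environment variable must be set before joblib spawns its workers.
# (Another option, if you construct the Parallel object yourself, is
# joblib.Parallel(n_jobs=-1, max_nbytes=None), which disables memmapping
# entirely at the cost of copying large arrays to each worker.)
memmap_dir = tempfile.mkdtemp(prefix="joblib_memmap_")
os.environ["JOBLIB_TEMP_FOLDER"] = memmap_dir

print(os.environ["JOBLIB_TEMP_FOLDER"])
```

Whether this helps depends on what is holding the .pkl open; an antivirus scanner touching the temp folder can produce the same WinError 32.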

Related

Cannot get Perforce Triggers to work properly

I've been trying to get this working all day, and I just can't figure out why it's not working.
I'm trying to implement a simple trigger that runs when a user submits a file.
Example entry in the .tmp file:
hello_trigger change-submit //testDepot/... "python F:/triggers/hello_trigger.py"
When I try to submit a file, I get this:
Submit validation failed -- fix problems then use 'p4 submit -c 10199'.
'hello_trigger' validation failed: python: can't open file 'F:/triggers/hello_trigger.py': [Errno 2] No such file or directory
The file exists and can be read, so it's not a Python issue; I get the same error with a .txt or .bat file.
From what I can gather, the problem seems to come from the depot path in the trigger line:
//testDepot/... fails.
//depot/... doesn't fail, but the script is never fired.
Any suggestions are greatly appreciated.
Also, testDepot is a stream; I'm not sure if that matters.
python: can't open file 'F:/triggers/hello_trigger.py': [Errno 2] No such file or directory
seems pretty clear that the file doesn't exist, at least from the point of view of this trigger command. Some things to double check:
This is running on the server machine, i.e. the place where the p4d service is running. If you have the script on your client machine the Python executable on the server isn't going to be able to find it!
Similarly, this is being run by whatever user is running p4d (on Windows this is usually the SYSTEM user, which may have limited permissions). Does that user have permission to read this path?
Could it be that your version of Python on Windows doesn't know how to handle paths with Unix-style forward slashes? (Many tools will normalize these for you but you shouldn't depend on it!) Try using a valid Windows path, i.e. F:\triggers\hello_trigger.py.
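As a quick way to check that last point, pathlib can normalize a forward-slash path into Windows form before you paste it into the triggers table. This is just an illustration, not part of the original answer:

```python
from pathlib import PureWindowsPath

# PureWindowsPath accepts forward slashes but renders the path with
# backslashes, which is the form a strict Windows tool expects.
script = PureWindowsPath("F:/triggers/hello_trigger.py")
print(str(script))  # F:\triggers\hello_trigger.py
```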

Pytest and virtualenv || Failed to configure container: [Errno 1]

I am currently working on a python program for finding flaky tests by running them multiple times. To achieve this goal, I'm executing the tests in a virtualenv in random order using pytest.
When I execute the program on a remote machine via a Slurm job, I get the following errors:
2019-11-26 18:18:18,642 - CRITICAL - Failed to configure container: [Errno 1] Creating overlay mount for '/' failed: Operation not permitted. Please use other directory modes, for example '--read-only-dir /'.
2019-11-26 18:18:18,777 - CRITICAL - Cannot execute 'pytest': execution in container failed.
This doesn't happen on my local machine, only in the task started via the Slurm job.
This is my first time working with python at this complexity, so I'm not really sure where to start solving the problem.
Thanks a lot in advance!
I finally figured out that the problem only occurs on the newest version of benchexec.
When my Python program executes run exec --no-container --pytest inside a virtualenv and benchexec is version 2.0 or higher, the error message from my original post shows up. I simply told pip to install an older version of benchexec in my virtualenv, and voilà, it works.
I would have created the tag benchexec, but I don't have the reputation needed to do so. Feel free to do it for me!
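As an aside, the repeated-random-order part of a flaky-test hunter like the one described above can be sketched in plain Python. The file names, run count, and seed here are hypothetical examples, not taken from the original program:

```python
import random

def build_commands(test_files, runs=5, seed=0):
    """Build one pytest command per run, each with the test files in a
    different (seeded, reproducible) random order; a test that passes in
    some orders and fails in others is a candidate flaky test."""
    rng = random.Random(seed)
    commands = []
    for _ in range(runs):
        order = rng.sample(test_files, len(test_files))
        commands.append(["pytest", "-q", *order])
    return commands

# Each command would then be handed to subprocess.run(); tests whose
# outcome differs across runs get flagged as flaky.
cmds = build_commands(["test_a.py", "test_b.py", "test_c.py"], runs=3)
```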

Read from "locked" file in windows using python

I have written a python script to process the output file of another program and run various stats on it. Right now, when I attempt to access that file from my python script:
with open('C:\\my_file_path', 'rb') as outfile:
    print(outfile)
I receive an error message:
PermissionError: [Errno 13] Permission denied: 'C:\my_file_path'
When using other programs (specifically HxD, the hex editor), Windows gives a more verbose error popup stating:
The process cannot access the file because it is being used by
another process.
Running the program as an Administrator, or with sudo from within WSL Ubuntu does not make any difference.
Is there any way to read the data which is being written to this file despite these locking conditions? I cannot mess with the first program as it's a low-level device driver for which I do not have source code. It essentially records data from a hardware sensor and writes it to a file for several hours so being able to concurrently parse that file in python (rather than waiting until the hours-long recording is over) would be much better.
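One approach worth trying is to bypass Python's default open() and ask Windows directly for a read handle with permissive share flags. This is a sketch, not a guaranteed fix: it can only succeed if the driver itself allowed read sharing when it opened the file, and it is Windows-only:

```python
import os
import sys

# Win32 constants (values from the Windows API headers)
GENERIC_READ     = 0x80000000
FILE_SHARE_READ  = 0x00000001
FILE_SHARE_WRITE = 0x00000002
OPEN_EXISTING    = 3

def open_shared(path):
    """Open `path` read-only with read+write sharing, so an existing
    writer's handle does not have to block us."""
    if sys.platform != "win32":
        raise OSError("this sketch only works on Windows")
    import ctypes, msvcrt
    kernel32 = ctypes.windll.kernel32
    kernel32.CreateFileW.restype = ctypes.c_void_p  # full 64-bit HANDLE
    handle = kernel32.CreateFileW(
        path, GENERIC_READ,
        FILE_SHARE_READ | FILE_SHARE_WRITE,
        None, OPEN_EXISTING, 0, None)
    if handle in (None, ctypes.c_void_p(-1).value):  # INVALID_HANDLE_VALUE
        raise ctypes.WinError()
    # Wrap the raw handle in a normal Python file object.
    fd = msvcrt.open_osfhandle(handle, os.O_RDONLY)
    return os.fdopen(fd, "rb")
```

If pywin32 is available, win32file.CreateFile offers the same share-mode control with less ctypes plumbing. Either way, if the writer opened the file with share mode 0 (exclusive), no reader can get in until it closes the handle.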

PermissionError while running python program on windows

I've been tinkering with the os module for a few days and encountered this error, which I can't seem to fix.
Here is an example:
import os
os.chdir('C:\\Users\\User\\Desktop')
os.rename('odin', 'ddin')
print(os.listdir())
And this is the error:
PermissionError: [WinError 5] Access is denied: 'odin' -> 'ddin'
Any help?
You are running your Python program as a user that does not have permission to change that file's name. Try running it as another user, or change the file's permissions to allow your user to write to it.
First, try running the same program in IDLE with administrator privileges.
Second, there is a chance that your antivirus software is blocking your Python script, so try disabling the antivirus.
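To narrow this down, it can help to catch the PermissionError explicitly rather than let it crash the script: WinError 5 on os.rename usually means either the file is open in another program or the current user lacks write access. A small sketch (the file names are just examples):

```python
import os
import tempfile

def try_rename(src, dst):
    """Attempt a rename, reporting permission problems (file open in
    another program, or no write access) instead of raising."""
    try:
        os.rename(src, dst)
        return True
    except PermissionError as exc:
        print(f"rename blocked: {exc}; close programs using {src!r} "
              f"or check its permissions")
        return False

# Demonstrate on a file we definitely own, in a fresh temp directory.
d = tempfile.mkdtemp()
src = os.path.join(d, "odin")
open(src, "w").close()
ok = try_rename(src, os.path.join(d, "ddin"))
```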

'std::bad_alloc' after mistakenly changing /usr/ permissions

I'm working on a Linux machine running Ubuntu Bionic Beaver, release 18.04.
The other day I mistakenly changed the /usr/ directory to be owned by a user instead of root. Unfortunately, I did so recursively, which broke quite a bit of the system because it also cleared the setuid permissions on some commands (e.g. passwd, sudo). We really can't reinstall (well, we can, but it'll cost!), so I booted from a LiveUSB and manually restored the correct user, group, and permissions for each file I could identify with a non-root:root owner:group, comparing against the ls -lha /usr/ output of another Ubuntu computer.
It seems to be mostly fixed, but now I'm running into the error 'std::bad_alloc' when running some pretty standard Python scripts. The strange part is that it only comes up sometimes. For example, if I open python from the command line and paste the code in, it all runs fine with no error. However, if I run the entire script from the command line (e.g. python script.py), I get this error. The full error message is:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)
But to add another twist: sometimes I can run the same Python script from the command line with no problem, and other times I get the error above.
If anybody has ideas about where specifically to look to fix this, that'd be great! I'm going to try the same approach as before, but with the ls -lha /usr/ output from an 18.04 release, as I only had output from a 16.04 release on hand.
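To automate the manual ls -lha comparison described above, a short script can walk a tree and list entries not owned by root:root. This is a sketch of the idea, not something from the original post:

```python
import os

def non_root_owned(root):
    """Yield paths under `root` whose owner or group isn't root
    (uid/gid 0), i.e. candidates for the chown cleanup described above."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)  # lstat: don't follow symlinks
            except OSError:
                continue  # vanished or unreadable; skip it
            if st.st_uid != 0 or st.st_gid != 0:
                yield path

# Example usage: suspects = list(non_root_owned("/usr"))
```

Note this only finds ownership problems; the cleared setuid bits (passwd, sudo) would need a separate check of st.st_mode against a healthy system.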
