I need to delete a folder under Windows. Sometimes there are processes that prevent me from deleting it (their current directory is the folder, they have one of the files in the folder open, etc.).
I'm looking for a way to loop over all processes that are using any files in my directory - then I will kill them and delete the folder.
I'm coding in Python, but I guess I should be looking at some Windows internals to do the job...
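One approach that seems workable from Python is the third-party psutil package. A minimal sketch, assuming psutil is installed and with a placeholder path (note that open_files() on Windows may not report every kind of handle, e.g. memory-mapped files, so this can miss some lockers):

```python
import psutil

TARGET = r"C:\path\to\folder"  # placeholder: the folder I want to delete

def uses_directory(proc, target):
    """True if the process's cwd or any of its open files is under `target`."""
    target = target.lower()
    try:
        if proc.cwd().lower().startswith(target):
            return True
        return any(f.path.lower().startswith(target)
                   for f in proc.open_files())
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        # Processes we can't inspect, or that exited meanwhile.
        return False

for proc in psutil.process_iter():
    if uses_directory(proc, TARGET):
        print("killing", proc.name(), proc.pid)
        proc.kill()
```

Killing processes outright is drastic; in practice you'd probably want to confirm the list before calling kill().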
I am using the Anaconda Python distribution. Under the Scripts folder, I see several ~.conda_trash files. Can these files be safely deleted?
I am using Windows 10, Anaconda 2020_07.
The .conda_trash files are generated on Windows when conda tries to delete a folder containing in-use files, since Windows can't delete files that are in use (I think Linux users don't run into the .conda_trash problem).
There is a delete_trash function, run at startup, that scans the entire tree looking for those files and deletes them.
So basically conda should be able to get rid of those files by itself. But if they are not needed anymore (and the scan takes too much time at startup), it shouldn't be a problem to delete them manually.
I have tested on my PC that the ~.conda_trash files can be deleted from the Scripts folder without affecting the Anaconda distribution.
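If you want to clear them in one go, here is a small sketch; the install path is an assumption for a default per-user install, so adjust it to yours:

```python
from pathlib import Path

# Assumed default Anaconda install location; adjust to your setup.
scripts = Path(r"C:\Users\me\anaconda3\Scripts")

for trash in scripts.glob("*.conda_trash"):
    print("removing", trash)
    trash.unlink()
```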
I know that similar questions have already been answered:
What will happen if I modify a Python script while it's running?
When are .pyc files refreshed?
Is it possible to replace a python file while its running
changing a python script while it is running
but I still can't find a clear answer to my question.
I have a main.py file and other *.py modules that I import from the main file. I run python.exe main.py from a (Windows) console and the Python interpreter generates *.pyc files. Then I change some *.py source files and run python.exe main.py again in another console (while the first instance is still running). The Python interpreter regenerates only the *.pyc files for the *.py source files I changed, while the other *.pyc files remain intact.
As I understand it, and as the answers to those questions suggest, the first instance of the running program loaded all (the first version of) *.pyc files into memory, and the second instance loaded all (the second version of) *.pyc files into memory.
My question is:
Are there any circumstances under which the first instance would need/want to reload *.pyc files from disk into memory again (some swapping to disk or something), or are the *.pyc files loaded into memory for good (until the first instance stops running)? Because if there are such circumstances, the first instance would then reload some of the new *.pyc files and could well crash.
I know I can deliberately reload modules in my Python source files:
How do I unload (reload) a module?
but I don't do that in my source files. Is there some other danger?
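For reference, the deliberate reload from that answer looks like this (mymodule is a hypothetical stand-in for one of my modules); none of my files do it:

```python
import importlib
import mymodule  # hypothetical module from my program

# Re-reads the source/.pyc from disk and re-executes the module.
importlib.reload(mymodule)
```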
Why am I asking this question: I came up with a strategy to upgrade my Python GUI program by just copying *.py files over the LAN to the clients' shared folders. On a client (Windows) PC, a user can have, for example, two or three instances of the GUI program open. While the user is running those instances, I make the upgrade (just copy some changed *.py files to their PC over the LAN). The user closes one of those instances (aware of the upgrade or not, it doesn't matter), starts it again, and the Python interpreter regenerates some *.pyc files. Again, is there any danger that the first two instances will ever need to reload *.pyc files, or are they (as far as those instances are concerned) loaded into memory for good?
Just for fun, I did exactly that as a test, and even deleted all *.pyc files while all three instances were running; they never needed any of the *.pyc files again (they never regenerated them) in those sessions.
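The test looked roughly like this (mymodule again stands in for one of my real modules):

```python
import pathlib
import mymodule  # hypothetical module already imported by main.py

# Delete every compiled file under the project directory
# while the program is running.
for pyc in pathlib.Path(".").rglob("*.pyc"):
    pyc.unlink()

# The already-imported module keeps working: its code objects
# live in memory (cached in sys.modules), not on disk.
mymodule.some_function()  # hypothetical call, still works fine
```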
I just need confirmation that it works that way, so I can be sure that upgrading this way is safe.
I have a directory with some files in it (they're mostly images with a JSON file). I want to process those files, possibly overwrite some of them and possibly create some new files from some other source. I can have an error happen at any point in this process.
How do I ensure that if an error occurs as I'm processing one of the files that I won't end up with the directory in a weird state? I want to only change the contents of the directory if everything went well.
Should I create a temporary directory with tempfile.mkdtemp, put my code in a try block, do my update in the temporary directory, swap the existing directory with the temporary one, and, in the finally, delete the temporary directory if it still exists?
I'm using Python (Django).
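In other words, something like this sketch, where process_files is a hypothetical stand-in for my processing code (the two renames are not a single atomic step, but a failure during processing leaves the original directory untouched):

```python
import os
import shutil
import tempfile

def update_directory(target_dir, process_files):
    parent = os.path.dirname(os.path.abspath(target_dir))
    tmp_root = tempfile.mkdtemp(dir=parent)  # same drive, so rename works
    work = os.path.join(tmp_root, "work")
    backup = target_dir + ".old"
    try:
        shutil.copytree(target_dir, work)  # start from the current contents
        process_files(work)                # overwrite / create files here
        os.rename(target_dir, backup)      # swap only after everything succeeded
        os.rename(work, target_dir)
        shutil.rmtree(backup)
    finally:
        shutil.rmtree(tmp_root, ignore_errors=True)
```

The two-rename dance is there because on Windows you can't rename or os.replace() onto an existing non-empty directory.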
I'm using some simple code to remove a folder structure: one folder that has multiple subfolders, populated each time the script is run and named 1, 2, 3, etc. Inside each subfolder is a bunch of .png files. I'm running Windows 10 Pro.
Whatever method I use to remove the files and folders, Windows "locks" subfolder "1" but successfully deletes everything else. The folder becomes impossible to remove, asking for permission from my own account or the Administrators group to delete it. The script cannot delete it and throws a PermissionError when it tries.
The folder disappears after a restart. Oddly, it also disappears after about 10 minutes of waiting and not doing anything.
I've used the following methods to remove the folders without success:
shutil.rmtree() normally
shutil.rmtree(onerror=fixpermission) with a function to clear read-only errors (roughly sketched after this list)
os.chmod(file_path, 0o777) on every file in the folder, then os.remove() on every file, then os.rmdir() on the folders
literally just os.rmdir-ing every subfolder
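The onerror variant looked roughly like this:

```python
import os
import shutil
import stat

def fixpermission(func, path, exc_info):
    """Clear the read-only attribute and retry the failed operation."""
    os.chmod(path, stat.S_IWRITE)
    func(path)

shutil.rmtree(r"C:\path\to\parent", onerror=fixpermission)
```

All of these got rid of everything except subfolder "1".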
Is it possible to get the working directory that a process (not under my control) was started with, after its current working directory has been changed?
I suspect Windows loses this information irretrievably, but I'm looking for confirmation.
As eryksun explained:
Python adds the script directory to sys.path, not the working directory. The Windows ProcessParameters store the DosPath string and Handle for the working directory. All traces of the initial working directory are removed when a new working directory is set, i.e. the DosPath string is updated and the old directory Handle is closed and replaced by the new one. I checked whether auditing process creation might help, but the audit event doesn't store the initial working directory.
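For completeness, the current (as opposed to initial) working directory of another process is easy to read, e.g. with the third-party psutil package:

```python
import psutil

proc = psutil.Process(1234)  # hypothetical PID of the target process
print(proc.cwd())            # the directory as it is now, not as it was at start
```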