I'm using shutil.copy in Python to copy a list of files. But when I copy the files to /usr/lib/, I get "permission denied" because I need to be an administrator to write there.
So how could I copy the files with admin permission, or
how could I get the admin password from the user so the files can be copied?
Ideas would be appreciated.
Make the user run the script as an administrator:
sudo python-script.py
Unix already has authentication and password management. You don't need to write your own, and there will doubtless be security bugs if you try.
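A minimal sketch of enforcing that from within the script itself, rather than rolling your own password handling (purely illustrative):
import os, sys

# Refuse to run without root privileges instead of prompting for a password ourselves.
if os.geteuid() != 0:
    sys.exit('This script must be run with sudo/root privileges.')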
To add to what katrielalex said: you can make the script run itself via sudo if you want. Here's a proof of concept:
import sys, os, subprocess

def do_root_stuff():
    print('Trying to list /root:')
    for filename in os.listdir('/root'):
        print(filename)

if __name__ == '__main__':
    print('Running as UID %d:' % os.geteuid())
    if os.geteuid() == 0:
        do_root_stuff()
    else:
        subprocess.check_call(['sudo', sys.executable] + sys.argv)
Start your program as a user that is allowed to write there. For example, log in as root first (su) or run the script with sudo myscript.py.
I came here looking for an alternative way of doing things.
Here is a quick and dirty hack I use, because I don't want my whole script to run as root:
import os

try:
    os.remove(file1)
except PermissionError:
    os.system('sudo chown $USER "{}"'.format(file1))
    # try again
    try:
        os.remove(file1)
    except OSError:
        print('Giving up on {}'.format(file1))
This is probably not completely foolproof, but it works for the quick scripts I hack together.
Oops, I see you were asking about copy permissions.
But you can apply the same logic:
import shutil, os

try:
    shutil.copy(file1, destination)
except PermissionError:
    os.system('sudo cp "{}" "{}"'.format(file1, destination))
Related
I'm running a Flask application on an Apache2 server on Ubuntu. The application takes input from a form and saves it to a text file. The file exists only for the moment it takes to upload it to S3; after that it's deleted:
foodforthought = request.form['txtfield']
with open("filetos3.txt", "w") as file:
    file.write(foodforthought)
    file.close()
s3.Bucket("bucketname").upload_file(Filename = "filetos3.txt", Key = usr+"-"+str(datetime.now()))
os.remove("filetos3.txt")
but the app doesn't have permission to create the file:
[Errno 13] Permission denied: 'filetos3.txt'
I already tried to give permissions to the folder where the app is located with:
sudo chmod -R 777 /var/www/webApp/webApp
but it doesn't work
My guess is that the application is run from a different location. What output do you get from this:
import os
print(os.getcwd())
That is the directory you need to set permissions on. Better yet, use an absolute path. And since the file is temporary, use tempfile as detailed here.
import tempfile

foodforthought = request.form['txtfield']
with tempfile.NamedTemporaryFile(mode="w") as fd:
    fd.write(foodforthought)
    fd.flush()
    # The name of the file is in the .name attribute.
    s3.Bucket("bucketname").upload_file(Filename=fd.name, Key=usr+"-"+str(datetime.now()))
# The file is automatically deleted when closed, which happens when leaving the context manager.
Some final notes: you don't need to close the file, since you use a context manager. Also, avoid setting 777 recursively. The safest approach is to set +wX, which sets the execute bit only on directories and the write bit on everything. Or better yet, be even more specific.
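For illustration, here is a rough Python equivalent of that chmod u+wX idea (just a sketch; the path is taken from the question):
import os, stat

def add_owner_write(top):
    # Roughly `chmod -R u+wX`: add the owner write bit everywhere,
    # and the owner execute bit only on directories.
    for dirpath, dirnames, filenames in os.walk(top):
        for d in dirnames:
            p = os.path.join(dirpath, d)
            os.chmod(p, os.stat(p).st_mode | stat.S_IWUSR | stat.S_IXUSR)
        for f in filenames:
            p = os.path.join(dirpath, f)
            os.chmod(p, os.stat(p).st_mode | stat.S_IWUSR)
    os.chmod(top, os.stat(top).st_mode | stat.S_IWUSR | stat.S_IXUSR)

add_owner_write('/var/www/webApp/webApp')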
RELATED: Python multiprocessing: Permission denied
I want to use Python's multiprocessing.Pool
import multiprocessing as mp

pool = mp.Pool(3)
for i in range(num_to_run):
    pool.apply_async(popen_wrapper, args=(i,), callback=log_result)
I get an OSError:
File "/usr/local/lib/python2.6/multiprocessing/__init__.py", line 178, in RLock
return RLock()
File "/usr/local/lib/python2.6/multiprocessing/synchronize.py", line 142, in __init__
SemLock.__init__(self, RECURSIVE_MUTEX, 1, 1)
File "/usr/local/lib/python2.6/multiprocessing/synchronize.py", line 49, in __init__
sl = self._semlock = _multiprocessing.SemLock(kind, value, maxvalue)
OSError: [Errno 13] Permission denied
I read in the related question that it's due to not having r/w to /dev/shm
Besides changing the permission in /dev/shm, is there a way to run as root in the code?
I initially thought you could do something like os.umask(), but it didn't work.
EDIT (rephrasing the question):
Let's say user A has r/w access to directory A.
You are user B and your program needs access to directory A. How do you run the program as user A?
In order from the least dangerous to the most dangerous:
You can try dropping privileges, as John Zwinck suggested.
Basically, you start the program with root-level permissions,
immediately do what you need to do, and then switch to a
non-root user.
From this Stack Overflow answer:
import os, pwd, grp

def drop_privileges(uid_name='nobody', gid_name='nogroup'):
    if os.getuid() != 0:
        # We're not root so, like, whatever dude
        return

    # Get the uid/gid from the names
    running_uid = pwd.getpwnam(uid_name).pw_uid
    running_gid = grp.getgrnam(gid_name).gr_gid

    # Remove group privileges
    os.setgroups([])

    # Try setting the new uid/gid
    os.setgid(running_gid)
    os.setuid(running_uid)

    # Ensure a very conservative umask
    old_umask = os.umask(0o77)
You could also require the root user's credentials to be
entered into the script, and then only use them when they are
required.
subprocess.call("sudo python RunStuffWithElevatedPrivelages.py")
#From here, the main script will continue to run without root permissions
Or, if you don't want the script to prompt the user for the password, you can do
subprocess.call("echo {} | sudo -S python RunStuffWithElevatedPrivelages.py".format(getRootCredentials()), shell=True)
Or you could just run the entire program as a root user -- sudo python myScript.py.
As far as temporarily giving users root permission to /dev/shm only while they run your script goes, the only thing I can think of is a script that runs in the background under the root user and temporarily grants anyone who uses your script access to /dev/shm. This could be done with a setuid helper that grants the permissions and then, after a certain amount of time or when the script ends, takes them away again. My only concern is whether a user who has temporarily been given such permissions might be able to secure more permanent privileges.
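A very rough sketch of that last idea (purely illustrative; it assumes a helper that is itself run as root and a fixed time window):
import os, stat, time

# Hypothetical root-run helper: relax /dev/shm permissions for a limited
# window, then restore the original mode.
SHM = '/dev/shm'
original_mode = stat.S_IMODE(os.stat(SHM).st_mode)

os.chmod(SHM, 0o1777)      # world-writable with the sticky bit, the usual mode for /dev/shm
try:
    time.sleep(600)        # leave the relaxed permissions in place for 10 minutes
finally:
    os.chmod(SHM, original_mode)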
I am attempting to use pathlib to recursively glob and/or find files. File permissions and groups are all over the place due to poor management of the filesystem which is out of my control.
The problem occurs when I lack both permissions and group membership on a directory that rglob attempts to descend into. rglob throws a KeyError, then a PermissionError, and then stops entirely. I see no way to recover gracefully from this and continue globbing.
The behavior I want is for rglob to skip directories I don't have permissions on and to generate the list of everything it saw/had permissions on. The all-or-nothing nature isn't going to get me very far in this particular case, because I'm almost guaranteed to have bad permissions on some directory or another on every run.
More specifics:
Python: 3.4.1, compiled from source, on Linux
Filesystem I am globbing on: automounted NFS share
How to reproduce:
mkdir /tmp/path_test && cd /tmp/path_test && mkdir dir1 dir2 dir2/dir3 && touch dir1/file1 dir1/file2 dir2/file1 dir2/file2 dir2/dir3/file1
su
chmod 700 dir2/dir3/
chown root:root dir2/dir3/
exit
python3.4.1
from pathlib import Path
p = Path('/tmp/path_test')
for x in p.rglob('*'): print(x)
At first I tried to manually iterate through the results of rglob() like this:
from pathlib import Path

p = Path('/tmp/path_test')
files = p.rglob('*')
while True:
    try:
        f = next(files)
    except (KeyError, PermissionError):
        continue
    except StopIteration:
        break
    print(f)
But it looks like next(files) throws a StopIteration after the first PermissionError, so I don't get any files after that.
You may be better off using os.walk().
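For example, a small sketch (using the path from the question) that collects everything it can read and simply reports directories it cannot descend into:
import os

def list_files(root):
    found = []

    def on_error(err):
        # os.walk passes the OSError for any directory it cannot list;
        # report it and keep walking instead of aborting.
        print('skipping {}: {}'.format(err.filename, err))

    for dirpath, dirnames, filenames in os.walk(root, onerror=on_error):
        for name in filenames:
            found.append(os.path.join(dirpath, name))
    return found

for path in list_files('/tmp/path_test'):
    print(path)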
When I execute the following code as root
import os

try:
    if os.getuid() == 0:
        import pwd, grp
        os.setgroups([])
        os.setgid(grp.getgrnam('my_group').gr_gid)
        os.setuid(pwd.getpwnam('my_user').pw_uid)
        os.umask(077)
        print 'dropped privileges successfully'
    else:
        print 'no need to drop privileges'
except:
    print 'unable to drop privileges'

print os.system('ls -lsa ~/')
then the last statement prints ls: cannot open directory /root/: Permission denied.
The cause is clear, but the question is: What do I need to do so that ~ will expand to /home/my_user?
In this case, where I needed root privileges in order to bind to privileged ports, it turned out to be the wrong approach to start the server as root.
Using authbind turned out to be the proper solution, so that sudo python server.py ended up being authbind python server.py.
I've read that authbind seems to have some problems with IPv6; more helpful information can be found at Is there a way for non-root processes to bind to “privileged” ports (<1024) on Linux?
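If you do still need ~ to expand to the dropped-privilege user's home directory, note that both the shell and os.path.expanduser consult the HOME environment variable, which os.setuid() does not update. A minimal sketch of fixing that up by hand (the user name is illustrative):
import os, pwd

def drop_to(username):
    pw = pwd.getpwnam(username)
    os.setgid(pw.pw_gid)
    os.setuid(pw.pw_uid)
    # setuid() leaves the environment untouched, so ~ would still point at
    # /root; update HOME (and USER) so expanduser and child shells agree.
    os.environ['HOME'] = pw.pw_dir
    os.environ['USER'] = pw.pw_name

drop_to('my_user')
os.system('ls -lsa ~/')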
Related: Deleting read-only directory in Python
I'm trying to have Python delete some directories and I get access errors on them. I think it's that the Python user account doesn't have rights?
WindowsError: [Error 5] Access is denied: 'path'
is what I get when I run the script.
I've tried
shutil.rmtree
os.remove
os.rmdir
they all return the same error.
We've had issues removing files and directories on Windows, even if we had just copied them, when they were set to read-only. shutil.rmtree() offers you a kind of exception handler for this situation. You call it and provide an exception handler like this:
import errno, os, stat, shutil

def handleRemoveReadonly(func, path, exc):
    excvalue = exc[1]
    if func in (os.rmdir, os.remove) and excvalue.errno == errno.EACCES:
        os.chmod(path, stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)  # 0777
        func(path)
    else:
        raise

shutil.rmtree(filename, ignore_errors=False, onerror=handleRemoveReadonly)
You might want to try that.
I've never used Python, but I would assume it runs as whatever user executes the script.
The scripts have no special user; they just run under the currently logged-in user who executed the script.
Have you tried checking that:
you are trying to delete a valid path? and that
the path has no locked files?
How are you running the script? From an interactive console session? If so, just open up a DOS command window (using cmd) and type 'whoami'. That is the user you are running the scripts as interactively.
OK, I saw your edits just now... why don't you print the path and check its properties to see whether the user account running the scripts has the required privileges?
If whoami does not work on your version of Windows, you can check the environment variables instead: run SET USERNAME and SET DOMAINNAME from your command window.
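From within Python itself, a quick way to check the same thing (just a sketch):
import getpass

# The account the script is actually running as (useful when it is
# launched by a scheduler or service rather than interactively).
print(getpass.getuser())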
@ThomasH: another brick in the wall.
On Unix systems, you have to ensure that the parent directory is writable too.
Here is another version:
def remove_readonly(func, path, exc):
    excvalue = exc[1]
    if func in (os.rmdir, os.remove) and excvalue.errno == errno.EACCES:
        # ensure the parent directory is writable too
        pardir = os.path.abspath(os.path.join(path, os.path.pardir))
        if not os.access(pardir, os.W_OK):
            os.chmod(pardir, stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)
        os.chmod(path, stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)  # 0777
        func(path)
    else:
        raise
If the script is being run as a scheduled task (which seems likely for a cleanup script), it will probably run as SYSTEM. It's (unwise, but) possible to set permissions on directories so that SYSTEM has no access.
A simple solution, after searching for hours, is to check first whether that folder actually exists!
import os, shutil

GIT_DIR = "C:/Users/...."
if os.path.exists(GIT_DIR):
    shutil.rmtree(GIT_DIR)
This did the trick for me.