I use fabric and have:
put('/projects/configuration-management/prototype','/etc/nginx/sites-available')
The result is:
Underlying exception:
Permission denied
Aborting.
Other configuration files can be uploaded without any problems. How can I avoid this issue?
It looks like you need superuser permissions; run it using sudo and it will work just fine.
The docs (link here) say:
While the SFTP protocol (which put uses) has no direct ability to
upload files to locations not owned by the connecting user, you may
specify use_sudo=True to work around this. When set, this setting
causes put to upload the local files to a temporary location on the
remote end (defaults to remote user’s $HOME; this may be overridden
via temp_dir), and then use sudo to move them to remote_path.
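A minimal sketch of that approach, assuming Fabric 1.x, where put() accepts use_sudo:
from fabric.api import put

# Stage the file in a temporary location on the remote end, then have
# Fabric move it into the root-owned directory with sudo (use_sudo=True).
put('/projects/configuration-management/prototype',
    '/etc/nginx/sites-available',
    use_sudo=True)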
Problem statement:
I have a Python 3.8.5 script running on Windows 10 that processes large files from multiple locations on a network drive and creates .png files containing graphs of the analyzed results. The graphs are all stored in a single destination folder on the same network drive. It looks something like this:
Source files:
\\drive\src1\src1.txt
\\drive\src2\src2.txt
\\drive\src3\src3.txt
Output folder:
\\drive\dest\out1.png
\\drive\dest\out2.png
\\drive\dest\out3.png
Occasionally we need to replot the original source file and examine a portion of the data trace in detail. This involves hunting for the source file in the right folder. The source file names are longish alphanumeric strings, so this process is tedious. To make it less tedious, I would like to create symlinks to the original source files and save them side by side with the .png files. The output folder would then look like this:
Output files:
\\drive\dest\out1.png
\\drive\dest\out1_src.txt
\\drive\dest\out2.png
\\drive\dest\out2_src.txt
\\drive\dest\out3.png
\\drive\dest\out3_src.txt
where \\drive\dest\out1_src.txt is a symlink to \\drive\src1\src1.txt, etc.
I am attempting to accomplish this via
os.symlink('//drive/src1/src1.txt', '//drive/dest/out1_src.txt')
However, no matter what I try, I get
PermissionError: [WinError 5] Access is denied
I have tried running the script from an elevated shell, enabling Developer Mode, and running
fsutil behavior set SymlinkEvaluation R2R:1
fsutil behavior set SymlinkEvaluation R2L:1
but nothing seems to work. There is absolutely no problem creating the symlinks on a local drive, e.g.,
os.symlink('//drive/src1/src1.txt', 'C:/dest/out1_src.txt')
but that does not accomplish my goals. I have also tried creating the links on the local drive as above and then copying them to the network location with
shutil.copy(src, dest, follow_symlinks=False)
and it fails with the same error message. Attempting the same thing directly from an elevated shell also fails with the same "Access is denied" error message:
mklink \\drive\dest\out1_src.txt \\drive\src1\src1.txt
It seems to be some type of Windows permission error. However, when I run fsutil behavior query SymlinkEvaluation in the shell I get:
Local to local symbolic links are enabled.
Local to remote symbolic links are enabled.
Remote to local symbolic links are enabled.
Remote to remote symbolic links are enabled.
Any idea how to resolve this? I have been googling for hours and according to everything I am reading it should work, except that it does not.
Open secpol.msc on the PC where the network share is hosted, navigate to Local Policies - User Rights Assignment - Create symbolic links, and add the account you use to connect to the network share. You then need to log off from the shared folder (Control Panel - All Control Panel Items - Credential Manager, or you may have to reboot both computers) and try again.
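Once that right has been granted and you have logged on again, a minimal sketch of the intended call, with the link target as the first argument and the new link path as the second, which is the order os.symlink expects:
import os

# os.symlink(src, dst): dst is the new link to create, src is what it points to.
os.symlink('//drive/src1/src1.txt', '//drive/dest/out1_src.txt')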
I'm running into an issue with volume mounting, combined with the creation of directories in Python.
Essentially, inside my container I'm writing to some path /opt/…, and I may have to create the path (which I'm using os.makedirs for).
If I mount a host path with bad "permissions" that the Docker container does not seem to be able to write to, e.g. -v /opt:/opt, the creation of the path inside the container DOES NOT FAIL. The makedirs(P) works, because inside the container it can make the dir just fine, since it runs with root permissions. However, nothing gets written on the host at /opt/…; the data just isn't there, but no exception is ever raised.
If I mount a path with proper/open permissions, like -v /tmp:/opt, then the data shows up on the host machine at /tmp/… as expected.
So, how do I avoid silently failing if there are no write permissions on the host path on the left side of the -v argument?
EDIT: my question is "how do I detect this bad deployment scenario, crash, and fail fast inside the container if the person who deploys the container does it wrong?" Just silently not writing data isn't acceptable.
The bad mount is owned by root on the host, right, and the good mount by the user in the Docker group on the host? Can you check the user/group of the mounted /opt? It should be different from that of /tmp.
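For example, a quick check from inside the container (a sketch; the expected uid/gid values depend on your deployment):
import os
import stat

# Print owner, group, and permission bits of the mounted directory.
st = os.stat('/opt')
print('uid=%d gid=%d mode=%o' % (st.st_uid, st.st_gid, stat.S_IMODE(st.st_mode)))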
You could expect a special file to exist in /opt and fail if it's not present:
import os

# PATH is a sentinel file the deployer is expected to place on the host side
# of the mount, e.g. /opt/.mounted (placeholder name).
PATH = '/opt/.mounted'
if not os.path.exists(PATH):
    raise Exception("could not find PATH: file missing or failed to mount /opt")
I want to check if a certain file fake-file.txt exists on a shared folder //123.456.7.890/Data/ and, if it does, I want to remove it and write a new file real-file.txt. I do not want to use the paramiko module for this task, and I got it working on a Windows machine like this:
import os

filename = '//123.456.7.890/Data/fake-file.txt'
if os.path.exists(filename):
    os.remove(filename)
    # and so on
However, this method does not work on a Unix-based machine (CentOS in this case); I get an IOError saying that the file doesn't exist. I am not really familiar with Unix-based machines, so there is probably something going wrong with the reference. How can I fix this problem? If something is unclear, let me know!
PS: the folder is password protected and I am able to ssh to it from the terminal.
It's not really a Python question. It's that on Linux you access remote filesystems by mounting them onto a local empty directory (which may require privileges you don't have) and then access them through that directory (then known as a mountpoint). Something like:
$ mkdir ./1234567890
$ mount -t cifs //123.456.7.890/Data -o username=username,password=password ./1234567890
If this succeeds, the Linux filename you use inside your Python program will be ./1234567890/fake-file.txt.
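For instance, a sketch of the original check rewritten against that (hypothetical) mountpoint:
import os

# The share mounted above at ./1234567890 stands in for //123.456.7.890/Data
filename = './1234567890/fake-file.txt'
if os.path.exists(filename):
    os.remove(filename)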
Some Linux systems may be configured with an automounter system so that particular filestore references automagically do the mount for you. You'll need to talk to your local system management to establish whether and how this is implemented locally.
PS 123.456.7.890 is not a valid IP address but I'm assuming you chose to hide the real and valid IP address that you were actually using. 123.45.67.89 is a better choice for a "random" IP address, or 192.168.22.33 (random private IP).
I've got some code which needs to grab code from github periodically (on a Windows machine).
When I do pulls manually, I use Git Bash, and I've got ssh keys set up for the repos I check, so everything is fine. However, when I try to run the same actions in a Python subprocess, I don't have the ssh services which Git Bash provides, and I'm unable to authenticate to the repo.
How should I proceed from here? I can think of a couple of different options:
I could revert to using https:// fetches. This is problematic because the repos I'm fetching use two-factor authentication and the fetches are going to run unattended. Is there a way to access an https repo that has 2FA from a command line?
I've tried calling sh.exe with arguments that will fire off ssh-agent and then issuing my commands so that everything is running more or less the way it does in Git Bash, but that doesn't seem to work:
"C:\Program Files (x86)\Git\bin\sh.exe" -c "C:/Program\ Files\ \(x86\)/Git/bin/ssh-agent.exe; C:/Program\ Files\ \(x86\)/Git/bin/ssh.exe -t git#github.com"
produces
SSH_AUTH_SOCK=/tmp/ssh-SiVYsy3660/agent.3660; export SSH_AUTH_SOCK;
SSH_AGENT_PID=8292; export SSH_AGENT_PID;
echo Agent pid 8292;
Could not create directory '/.ssh'.
The authenticity of host 'github.com (192.30.252.129)' can't be established.
RSA key fingerprint is XXXXXXXXXXX
Are you sure you want to continue connecting (yes/no)? yes
Failed to add the host to the list of known hosts (/.ssh/known_hosts).
Permission denied (publickey).
Could I use an ssh module in Python, like paramiko, to establish a connection? It looks to me like that's only for ssh'ing into a remote terminal. Is there a way to make it provide an ssh connection that git.exe can use?
So, I'd be grateful if anybody has done this before or has a better alternative.
Git Bash sets the HOME environment variable, which allows git to find the ssh keys (in %HOME%/.ssh).
You need to make sure the Python process has HOME defined, or define it, pointing to the same path.
As explained in "Python os.environ[“HOME”] works on idle but not in a script", you need to set HOME to %USERPROFILE% (or, in Python, to os.path.expanduser("~")).
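A minimal sketch of that, assuming git.exe is on PATH and using a placeholder repository location:
import os
import subprocess

# Copy the current environment and make sure HOME points at the profile
# directory that contains the .ssh folder Git Bash uses.
env = os.environ.copy()
env.setdefault('HOME', os.path.expanduser('~'))

# 'C:/path/to/repo' is a placeholder for the working copy you want to update.
subprocess.check_call(['git', 'pull'], cwd='C:/path/to/repo', env=env)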
I have a Python script running on the default OSX webserver, stored in /Library/WebServer/CGI-Executables. That script spits out a list of files on a network drive using os.listdir.
If I just execute this from the terminal, it works as expected, but when I try to access it through a browser (computer.local/cgi-bin/test.py), I get a permissions error:
<type 'exceptions.OSError'>: [Errno 13] Permission denied: '/Volumes/code/code/_sendrender/queue/waiting'
Is there any way to give the script permission to access the network when accessed via CGI?
I don't know much about the default OSX webserver, but the webserver process is probably being run as some user, and that user needs to be able to access those files. To find out who the user is, you can use the ps command. Then, depending on the configuration of the network shared drive, you can add this user to the users allowed to access this data.
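As a quick check, here is a sketch of a tiny CGI script that reports which account it runs under, so you know who to grant access to:
#!/usr/bin/env python
import os
import pwd

print("Content-Type: text/plain")
print("")
# Report the effective user the web server runs this script as.
uid = os.geteuid()
print("running as: %s (uid %d)" % (pwd.getpwuid(uid).pw_name, uid))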