Creating a directory from one server to the next - python

I am using Python to create a directory on a remote server from another server. My code is:
executedString = "sudo ssh -i mykey.pem server_ip %s" % (name_of_directory)
os.popen(executedString)
I also tried os.system(), but that didn't work. The funny thing is that when I run this through the terminal it works, but when I execute it from my Python script, it doesn't.
I ensured that all the files were owned by the same user and group, but that didn't help.
Please note that I am running this via CGI, which is where the command fails to execute even though the rest of the script works.
Please advise.

I realized it was a host authentication key issue when I printed out all the logs of the ssh activity. The solution was to copy ~/.ssh to the /root directory.
I can only presume that either the key was not present in /root, or ssh was looking for the auth key there, since the process runs as root on behalf of the www-data user/group.

It appears you're executing a shell command on a UNIX host via a pipe. I'm thinking your command line should be the following:
executedString = "sudo ssh -i mykey.pem server_ip **mkdir** %s" % (directory_name)

Related

Running a python script which uses paramiko to ssh into another server and perform a db refresh from Jenkins

I have a python script that does some database operations from an EC2 server by SSHing into another server using paramiko. The script runs fine when I run it directly on the server as ec2-user, but when I run it from Jenkins I get a permission error on the /home/ec2-user/.ssh/id_rsa file.
I used the command python3.8 /home/ec2-user/db_refresh.py to run the script from Jenkins.
After some reading, and with the help of the whoami command, I found that's expected, since Jenkins runs the scripts as the jenkins user, and no one apart from the owner has permission to read the private keys in the ~/.ssh/ folder.
I could change the permissions so that everyone can read ec2-user's private key, but from what I've read that would be a terrible idea, and I believe ssh wouldn't even work if anyone apart from the owner had read permission on that private key.
import paramiko

sshcon = paramiko.SSHClient()
sshcon.connect(MYSQL_HOST, username=SSH_USERNAME, key_filename='/home/ec2-user/.ssh/id_rsa')
That is how I SSH into my database server using paramiko.
Can I run my scripts from Jenkins as ec2-user, or is there some other way to overcome this?
In the end, it turned out to be quite simple (stupid me).
I just created a key pair for the jenkins user and used it for my operations.
One thing to note: since jenkins is a service account, a normal su jenkins won't work; I had to use sudo su -s /bin/bash jenkins.
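For reference, the only change needed in the paramiko call was the key path; a minimal sketch, assuming the new key pair lives under the jenkins user's home (/var/lib/jenkins is a common default, adjust to your installation):

import paramiko

sshcon = paramiko.SSHClient()
sshcon.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # or load known_hosts explicitly
sshcon.connect(MYSQL_HOST, username=SSH_USERNAME,
               key_filename='/var/lib/jenkins/.ssh/id_rsa')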

How to run git fetch over ssh in a windows subprocess

I've got some code which needs to grab code from github periodically (on a Windows machine).
When I do pulls manually, I use Git Bash, and I've got ssh keys set up for the repos I check, so everything is fine. However, when I try to run the same actions in a Python subprocess, I don't have the ssh services that Git Bash provides, and I'm unable to authenticate to the repo.
How should I proceed from here? I can think of a couple of different options:
I could revert to using https:// fetches. This is problematic because the repos I'm fetching use 2-factor authentication and the fetches are going to run unattended. Is there a way to access an https repo that has 2FA from the command line?
I've tried calling sh.exe with arguments that will fire off ssh-agent and then issuing my commands so that everything runs more or less the way it does in Git Bash, but that doesn't seem to work:
"C:\Program Files (x86)\Git\bin\sh.exe" -c "C:/Program\ Files\ \(x86\)/Git/bin/ssh-agent.exe; C:/Program\ Files\ \(x86\)/Git/bin/ssh.exe -t git@github.com"
produces
SSH_AUTH_SOCK=/tmp/ssh-SiVYsy3660/agent.3660; export SSH_AUTH_SOCK;
SSH_AGENT_PID=8292; export SSH_AGENT_PID;
echo Agent pid 8292;
Could not create directory '/.ssh'.
The authenticity of host 'github.com (192.30.252.129)' can't be established.
RSA key fingerprint is XXXXXXXXXXX
Are you sure you want to continue connecting (yes/no)? yes
Failed to add the host to the list of known hosts (/.ssh/known_hosts).
Permission denied (publickey).
Could I use an ssh module in Python like paramiko to establish a connection? It looks to me like that's only for ssh'ing into a remote terminal. Is there a way to make it provide an ssh connection that git.exe can use?
So, I'd be grateful if anybody has done this before or has a better alternative.
Git Bash sets the HOME environment variable, which allows git to find the ssh keys (in %HOME%/.ssh).
You need to make sure the Python process has HOME defined to the same path.
As explained in "Python os.environ["HOME"] works on idle but not in a script", you need to set HOME to %USERPROFILE% (or, in Python, to os.path.expanduser("~")).
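A minimal sketch of that fix, assuming git.exe is on PATH (the repository path is a placeholder):

import os
import subprocess

env = dict(os.environ)
env["HOME"] = os.path.expanduser("~")  # typically resolves to %USERPROFILE% on Windows

# run the fetch with HOME set so ssh can find %HOME%/.ssh
subprocess.check_call(["git", "fetch", "origin"], cwd=r"C:\path\to\repo", env=env)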

How do I give ssh keys for multiple hosts from the terminal command line in fabric

I want to test the installation of various packages on multiple hosts. Different hosts have different passwords/ssh keys.
I don't want to hard-code the host names and their ssh keys in my fab file. How can I pass multiple hosts and their ssh keys through the terminal command line?
The code in my fab file looks like this:
from fabric.api import settings, run, env

def test_installation(cmd):
    run("dpkg -s %s" % cmd)
And I am calling it like this:
fab test_installation:tomcat7 --hosts "user1@host1:port","user2@host2:port" -i "ssh-file-path for host1","ssh-file-path for host2"
Please suggest the proper way; any help is most welcome.
You don't provide the ssh keys of the hosts, only your own ssh key, which is registered in authorized_keys on each host. And you provide only the path to it (usually ~/.ssh/id_rsa).
Moreover, you can configure fabric to use your ssh config, so you don't need to hard-code any paths at all. It can use the same keys it would use if you typed ssh my_host in a shell.
How to do that is covered in the fabric documentation:
http://docs.fabfile.org/en/1.8/usage/execution.html#leveraging-native-ssh-config-files
http://docs.fabfile.org/en/1.8/usage/env.html#full-list-of-env-vars
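A minimal sketch of both options, assuming fabric 1.x as in the linked docs (the key paths are placeholders):

from fabric.api import env

env.use_ssh_config = True  # reuse hosts, users and keys from ~/.ssh/config
# alternatively, list several keys and fabric will try each one when connecting
env.key_filename = ["~/.ssh/key_for_host1", "~/.ssh/key_for_host2"]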
You can also set up your ~/.ssh/config to use a different key for each host.
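For illustration, such a per-host configuration could look like this (host names and key paths are placeholders):

Host host1
    IdentityFile ~/.ssh/key_for_host1
Host host2
    IdentityFile ~/.ssh/key_for_host2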
If you are not familiar with ssh and configuring it, please see:
http://linux.die.net/man/5/ssh_config

python change privileges to restart external application

Hi, I have a Python script which modifies an application configuration file. To apply the change, I need to restart the application, which I do by calling an init.d script. But I need to be root when I do this, otherwise the application cannot bind to its port. However, I don't want to execute my whole Python script with root privileges. How can I execute the restart with root privileges and then drop them?
I set the user at the beginning with:
import os
import pwd

if __name__ == "__main__":
    uid = pwd.getpwnam('ubuntu')[2]  # uid of the unprivileged user
    os.setuid(uid)
    app.run(host='0.0.0.0', port=5001, debug=True)  # app is defined elsewhere in the script
and at the end of my script I need to execute:
commands.getoutput('/etc/init.d/webapplication restart')
webapplication binds to port 80.
If I execute my script with this configuration, webapplication can't start and returns the message "cannot bind socket on the 80".
Any idea for a clean way to execute just one external command with root privileges from a Python script on a Debian server?
Thanks in advance.
P.S.: I have tried using the same method as in my main function, replacing the user "ubuntu" with "root", but it doesn't work.
You can use either of two approaches:
Create a program whose only job is to run the init.d script, and make it setuid root (make sure to write it securely!). The main script runs with ordinary user permissions and calls the runner.
There's no way for a program to escalate its own privileges (except by running sudo, which is at least as expensive as approach 1), but a program running as root can de-escalate itself. So you can do some steps as root and then continue as the normal user by setting your uid down to the real uid. This won't be any help if you need root privileges for the last thing the program does, though.
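A minimal sketch of the second approach, assuming the script is started as root and that 'ubuntu' (from the question) is the unprivileged user to continue as:

import os
import pwd
import subprocess

# do the privileged work first, while still root
subprocess.call(['/etc/init.d/webapplication', 'restart'])

pw = pwd.getpwnam('ubuntu')
os.setgid(pw.pw_gid)  # drop the group first; after setuid() this is no longer allowed
os.setuid(pw.pw_uid)  # from here on, the process cannot regain root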
Finally, to reach my goal, I used the sudo solution. To do that on a Debian server:
apt-get install sudo
Edit /etc/sudoers and add this line (my_user is the user used for the setuid):
my_user ALL = NOPASSWD: /etc/init.d/webapplication
And in my Python script:
commands.getoutput('sudo /etc/init.d/webapplication restart')
And that works.
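One side note: the commands module is Python 2 only (it was removed in Python 3). On Python 3, subprocess offers a drop-in equivalent:

import subprocess

output = subprocess.getoutput('sudo /etc/init.d/webapplication restart')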

Cannot write a script to "svn export" in Python

I would like to write a script that will tell another server to svn export an SVN repository.
This is my python script:
import os

# svn export to crawlers
for s in ['work1.main', 'work2.main']:
    cmd = 'ssh %s "cd /home/zes/ ; svn --force export svn+ssh://174.113.224.177/home/svn/dragon-repos"' % s
    print cmd
    os.system(cmd)
Very simple. It will ssh into each host, cd to the correct directory, and then call the svn export command.
However, when I run this script...
$ python export_to_crawlers.py
ssh work1.main "cd /home/zes/ ; svn --force export svn+ssh://174.113.224.177/home/svn/dragon-repos"
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-with-mic,password).
svn: Connection closed unexpectedly
ssh work2.main "cd /home/zes/ ; svn --force export svn+ssh://174.113.224.177/home/svn/dragon-repos"
Host key verification failed.
svn: Connection closed unexpectedly
Why do I get this error and fail to export the repository? I can type the commands manually on the command line and they work. Why don't they work in the script?
If I change the command to the following, nothing happens at all:
cmd = 'ssh %s "cd /home/zes/ ;"' % s
This is a problem with SSH.
Permission denied, please try again.
This means that ssh can't log in. Either your ssh agent doesn't have the correct key loaded, you're running the script as a different user, or the environment isn't passed on correctly. Check that the variables SSH_AUTH_SOCK and SSH_AGENT_PID are passed to the subprocess of your Python script.
Host key verification failed.
This error means that the remote host isn't known to ssh, i.e. the host key is not found in the file $HOME/.ssh/known_hosts. Again, make sure that you're checking the home directory of the effective user of the script.
[EDIT] When you run the script, Python becomes the "input" of ssh: ssh is no longer connected to a console and will ask Python for the password to log in. Since Python has no idea what ssh wants, it ignores the request. ssh tries three times and dies.
To solve it, run these commands before you run the Python script:
eval $(ssh-agent)
ssh-add path-to-your-private-key
Replace path-to-your-private-key with the path to your private key (the one you use to log in). ssh-add will ask for your passphrase, and the ssh-agent will keep the unlocked key in a secure place. It will also modify your environment. So when ssh runs the next time, it will notice that an ssh agent is running and ask it first. Since the ssh-agent holds the key, ssh will log in without bothering Python.
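A quick way to verify that from the Python side; a minimal sketch, assuming the agent was started in the same shell that launches the script:

import os

# os.system() and subprocess pass os.environ to child processes by default,
# so if these print real values here, the ssh child will see them too
for var in ('SSH_AUTH_SOCK', 'SSH_AGENT_PID'):
    print("%s=%s" % (var, os.environ.get(var)))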
To solve the second issue, run the second ssh command manually once. ssh will then add the host to its known_hosts file and won't ask again.
[EDIT2] See this howto for a detailed explanation of how to log in to a remote server via ssh with your private key.
I guess that it is related to ssh. Are you using a public key to connect automatically? I think your shell knows this key, but that is not the case for Python.
I am not sure; it's just an idea. I hope it helps.
Check out the pxssh module that is part of the pexpect project:
https://pexpect.readthedocs.org/en/latest/api/pxssh.html
It simplifies dealing with automating ssh-ing into machines.
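A minimal sketch of that approach, assuming pexpect 4.x and password-based login (the hostname and credentials are placeholders):

from pexpect import pxssh

s = pxssh.pxssh()
s.login('work1.main', 'your_username', 'your_password')
s.sendline('cd /home/zes && svn --force export svn+ssh://174.113.224.177/home/svn/dragon-repos')
s.prompt()       # wait until the command finishes and the prompt returns
print(s.before)  # everything the svn command printed
s.logout()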
