Paramiko SSH - python

I'm trying the simplest way to make an SSH connection and execute a command with paramiko:
import paramiko, base64

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect('10.50.0.150', username='XXXXX', password='XXXXXX')
stdin, stdout, stderr = client.exec_command('show performance -type host-io')
for line in stdout:
    print '... ' + line.strip('\n')
client.close()
Error:
Traceback (most recent call last):
File "a.py", line 5, in <module>
stdin, stdout, stderr = client.exec_command('show performance -type host-io')
File "/usr/lib/python2.6/site-packages/paramiko-1.10.1-py2.6.egg/paramiko/client.py", line 374, in exec_command
chan.exec_command(command)
File "/usr/lib/python2.6/site-packages/paramiko-1.10.1-py2.6.egg/paramiko/channel.py", line 218, in exec_command
self._wait_for_event()
File "/usr/lib/python2.6/site-packages/paramiko-1.10.1-py2.6.egg/paramiko/channel.py", line 1122, in _wait_for_event
raise e
EOFError
If I execute this code with a different command, it works; and against another computer, this command works fine via an interactive SSH shell.
Any idea?

After client.connect(. . .) you need to open a session on the underlying transport:
session = client.get_transport().open_session()
then use session.exec_command(. . .).
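As a sketch, the answer above could be wrapped in a small helper; exec_via_session is a made-up name here, and client is assumed to be an already-connected paramiko.SSHClient:

```python
def exec_via_session(client, command, bufsize=4096):
    # Open a fresh channel on the client's existing transport,
    # as suggested above, instead of calling client.exec_command().
    session = client.get_transport().open_session()
    session.exec_command(command)
    # Read until the remote side closes the channel (recv returns b"").
    output = b""
    while True:
        chunk = session.recv(bufsize)
        if not chunk:
            break
        output += chunk
    status = session.recv_exit_status()
    session.close()
    return status, output
```

For example: status, out = exec_via_session(client, 'show performance -type host-io').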

Related

How to save the last output in a variable on KeyboardInterrupt (subprocess)

I am completely new to the subprocess module, and I am trying to automate deauthentication attack commands. When I run airodump-ng wlan0mon, as you know, it looks for nearby APs and the clients connected to them.
Now when I try to run this command using, let's suppose, p = subprocess.run(["airmon-ng","wlan0mon"], capture_output=True) in Python, the command runs until the user hits Ctrl+C, so it should save the last output in the variable when the user hits Ctrl+C, but instead I get this error:
Traceback (most recent call last):
File "Deauth.py", line 9, in <module>
p3 = subprocess.run(["airodump-ng","wlan0"], capture_output=True)
File "/usr/lib/python3.8/subprocess.py", line 491, in run
stdout, stderr = process.communicate(input, timeout=timeout)
File "/usr/lib/python3.8/subprocess.py", line 1024, in communicate
stdout, stderr = self._communicate(input, endtime, timeout)
File "/usr/lib/python3.8/subprocess.py", line 1866, in _communicate
ready = selector.select(timeout)
File "/usr/lib/python3.8/selectors.py", line 415, in select
fd_event_list = self._selector.poll(timeout)
KeyboardInterrupt
What can I try to resolve this?
Just use Python's error handling. Catch any KeyboardInterrupt (within your subprocess function) using try and except statements like so:
def stuff(things):
    try:
        # do stuff, keeping track of last_value as you go
        ...
    except KeyboardInterrupt:
        return last_value
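A runnable version of that idea might look like this; run_until_interrupt is a made-up helper name, and a harmless Python one-liner stands in for the airodump-ng command:

```python
import subprocess
import sys

def run_until_interrupt(cmd):
    # Stream the command's stdout line by line; on Ctrl+C, stop the
    # child process and return whatever output was captured so far.
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    lines = []
    try:
        for line in proc.stdout:
            lines.append(line.rstrip("\n"))
    except KeyboardInterrupt:
        proc.terminate()
    finally:
        proc.wait()
    return lines

if __name__ == "__main__":
    # Stand-in for ["airodump-ng", "wlan0mon"]:
    out = run_until_interrupt(
        [sys.executable, "-c", "print('ap 1'); print('ap 2')"])
    print(out[-1])  # the last line seen before exit or Ctrl+C
```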

AttributeError: 'NoneType' object has no attribute 'time' paramiko

import paramiko
key = paramiko.RSAKey.from_private_key_file("abc.pem")
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
print("connecting")
ssh.connect(hostname="1.1.1.1", username="abc", pkey=key)
print("connected")
commands = "ip a"
stdin, stdout, stderr = ssh.exec_command(commands)
print(stdout.read())
print(stderr.read())
print(stdin.read())
ssh.close()
Why do I sometimes get AttributeError: 'NoneType' object has no attribute 'time' in Python 3.8, and why do I sometimes have to wait a long time for the result to show (and how can I see progress)?
Error code:
Exception ignored in: <function BufferedFile.__del__ at 0x108271ee0>
Traceback (most recent call last):
File "/venv/lib/python3.8/site-packages/paramiko/file.py", line 66, in __del__
File "/venv/lib/python3.8/site-packages/paramiko/channel.py", line 1392, in close
File "/venv/lib/python3.8/site-packages/paramiko/channel.py", line 991, in shutdown_write
File "/venv/lib/python3.8/site-packages/paramiko/channel.py", line 967, in shutdown
File "/venv/lib/python3.8/site-packages/paramiko/transport.py", line 1846, in _send_user_message
AttributeError: 'NoneType' object has no attribute 'time'
Advanced: how can I use paramiko to do a double SSH hop?
localhost >> a(server) ssh >> b
Just close stdin
stdin, stdout, stderr = ssh.exec_command(commands)
stdin.close()
Maybe you can try something like this:
stdin, stdout, stderr = ssh.exec_command(commands)
time.sleep(5)
(don't forget to import time)
This seems to give the command more time to be processed.
Add the below:
if __name__ == "__main__":
    main()
then put your code in a main() function.
This is the bug reported at https://github.com/paramiko/paramiko/issues/1617. As @NobodyNada said, adding a time.sleep(5) is a workaround.
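Putting the answers above together as a sketch (run_and_drain is a hypothetical helper; ssh is assumed to be a connected paramiko.SSHClient): closing stdin and draining both streams before the script exits keeps BufferedFile.__del__ from running during interpreter shutdown, after the transport is already gone.

```python
def run_and_drain(ssh, command):
    stdin, stdout, stderr = ssh.exec_command(command)
    stdin.close()          # the remote command reads no input
    out = stdout.read()    # drain fully while the transport is still alive
    err = stderr.read()
    return out, err
```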

Getting git fetch output to file through python

I am trying to save git fetch output to a file through Python, using:
subprocess.check_output(["git", "fetch", "origin", ">>", "C:/bitbucket_backup/backup.log", "2>&1"], cwd='C:/bitbucket_backup/loopx')
but I believe something is missing in the subprocess.check_output args, because when adding >> C:/bitbucket_backup/backup.log 2>&1 I receive this error:
Traceback (most recent call last):
File "<pyshell#28>", line 1, in <module>
subprocess.check_output(["git", "fetch", "origin", ">>", "C://bitbucket_backup//backup.log", "2>&1"], cwd='C://bitbucket_backup//loopx')
File "C:\Users\fabio\AppData\Local\Programs\Python\Python36-32\lib\subprocess.py", line 336, in check_output
**kwargs).stdout
File "C:\Users\fabio\AppData\Local\Programs\Python\Python36-32\lib\subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['git', 'fetch', 'origin', '>>', 'C://bitbucket_backup//backup.log', '2>&1']' returned non-zero exit status 128.
Quickfix: enable shell features so the redirection arguments are interpreted:
subprocess.check_output(["git", "fetch", "origin", ">>", "C:/bitbucket_backup/backup.log", "2>&1"], cwd='C:/bitbucket_backup/loopx', shell=True)
But that's really dirty, as Python can do this natively:
output = subprocess.check_output(["git", "fetch", "origin"], stderr=subprocess.STDOUT, cwd='C:/bitbucket_backup/loopx')
with open("C:/bitbucket_backup/backup.log", "ab") as f:  # append to file
    f.write(output)
That said, if you're going to rewrite all your git commands in Python, maybe you should use a Python git API, such as GitPython.
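Another option along the same lines is to let subprocess write straight into the log file, so the redirection happens at the OS level. This is only a sketch: a Python one-liner stands in for the real git fetch command, and the log path is a temp-directory placeholder:

```python
import os
import subprocess
import sys
import tempfile

log_path = os.path.join(tempfile.gettempdir(), "backup.log")
with open(log_path, "ab") as log:
    subprocess.run(
        [sys.executable, "-c", "print('fetching origin')"],
        stdout=log,                # equivalent of >> backup.log
        stderr=subprocess.STDOUT,  # equivalent of 2>&1
        check=True,
    )
```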

How does Jenkins deal with python scripts with Popen

We use Jenkins to run our cron jobs. Our server runs CentOS 6.8, and Jenkins is version 1.651.
I'm running into a funny problem. When I run my script from the terminal, it works fine. I don't get any errors.
When I run the same script in Jenkins, it fails and says there's no such file or directory.
The error message from the Jenkins output I get is this:
Traceback (most recent call last):
File "runMTTRScript.py", line 256, in <module>
main()
File "runMTTRScript.py", line 252, in main
startTest(start, end, impalaHost)
File "runMTTRScript.py", line 72, in startTest
getResults(start, end)
File "runMTTRScript.py", line 111, in getResults
proc1 = subprocess.Popen(cmd, stdout=processlistOut)
File "/glide/lib64/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
File "/glide/lib64/python2.7/subprocess.py", line 1335, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
Here's the code that the error above is complaining about:
with open(JAVAOUT + "_processList" + outfileDate, 'w') as processlistOut, \
        open(JAVAOUT + "_innodb" + outfileDate, 'w') as innodbOut:
    cmd = ["java", "-cp", "MTTR4hrs-1.0.5-SNAPSHOT-allinone.jar", "com.servicenow.bigdata.MTTR4hrs", "-c", "config.properties", "-m", DBIFILE, "-d", start, end, "-f", "processlist", "-ds", "dbi"]
    proc1 = subprocess.Popen(cmd, stdout=processlistOut)
    cmd = ["java", "-cp", "MTTR4hrs-1.0.5-SNAPSHOT-allinone.jar", "com.servicenow.bigdata.MTTR4hrs", "-c", "config.properties", "-m", DBIFILE, "-d", start, end, "-f", "engineinnodbstatus", "-ds", "dbi"]
    proc2 = subprocess.Popen(cmd, stdout=innodbOut)
Why would it complain that a file is not there under Jenkins but be fine when I run it from the command line? Could this be some race condition in Python that I'm not aware of, where the with ... open doesn't open the file fast enough for Popen to use it? I'm also open to it being an OS problem (too many open files, something stupid, etc.).
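For what it's worth, OSError: [Errno 2] from Popen means the executable itself (here, java) could not be found, and Jenkins jobs typically run with a much smaller PATH than an interactive shell. A sketch that fails early with a clearer message (resolve is a made-up helper name):

```python
import shutil

def resolve(cmd):
    # Replace the executable with its absolute path, or fail with a
    # message that names the missing program instead of a bare Errno 2.
    exe = shutil.which(cmd[0])
    if exe is None:
        raise FileNotFoundError("%r not found on PATH" % cmd[0])
    return [exe] + cmd[1:]

# e.g. subprocess.Popen(resolve(cmd), stdout=processlistOut)
```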

python subprocess Popen hangs

OpenSolaris derivate (NexentaStor), python 2.5.5
I've seen numerous examples, and many seem to indicate that the problem is a deadlock. I'm not writing to stdin, so I think the problem is that one of the shell commands exits prematurely.
What's executed in Popen is:
ssh <remotehost> "zfs send tank/dataset@snapshot | gzip -9" | gzip -d | zfs recv tank/dataset
In other words: log in to a remote host, send a replication stream of a storage volume piped through gzip, and pipe that to zfs recv to write to a local datastore.
I've seen the explanation about buffers, but I'm definitely not filling those up; gzip is bailing out prematurely, so I think process.wait() never sees an exit.
process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
process.wait()
if process.returncode == 0:
    for line in process.stdout:
        stdout_arr.append([line])
    return stdout_arr
else:
    return False
Here's what happens when I run and interrupt it
# ./zfs_replication.py
gzip: stdout: Broken pipe
^CKilled by signal 2.
Traceback (most recent call last):
File "./zfs_replication.py", line 155, in <module>
Exec(zfsSendRecv(dataset, today), LOCAL)
File "./zfs_replication.py", line 83, in Exec
process.wait()
File "/usr/lib/python2.5/subprocess.py", line 1184, in wait
pid, sts = self._waitpid_no_intr(self.pid, 0)
File "/usr/lib/python2.5/subprocess.py", line 1014, in _waitpid_no_intr
return os.waitpid(pid, options)
KeyboardInterrupt
I also tried the Popen.communicate() method, but that too hangs if gzip bails out. In this case the last part of my command (zfs recv) exits because the local dataset has been modified, so the incremental replication stream will not be applied. Even though that will be fixed, there has got to be a way of dealing with gzip's broken pipes?
process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
stdout, stderr = process.communicate()
if process.returncode == 0:
    dosomething()
else:
    dosomethingelse()
And when run:
cannot receive incremental stream: destination tank/repl_test has been modified
since most recent snapshot
gzip: stdout: Broken pipe
^CKilled by signal 2.
Traceback (most recent call last):
File "./zfs_replication.py", line 154, in <module>
Exec(zfsSendRecv(dataset, today), LOCAL)
File "./zfs_replication.py", line 83, in Exec
stdout, stderr = process.communicate()
File "/usr/lib/python2.5/subprocess.py", line 662, in communicate
stdout = self._fo_read_no_intr(self.stdout)
File "/usr/lib/python2.5/subprocess.py", line 1025, in _fo_read_no_intr
return obj.read()
KeyboardInterrupt
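For reference, the first snippet's wait() with stdout=PIPE is exactly the deadlock pattern the subprocess docs warn about: once the child fills the pipe buffer it blocks, and wait() never returns. Reading both pipes with communicate() before checking returncode avoids that, and capturing stderr also means messages like gzip: stdout: Broken pipe can be inspected instead of landing on the terminal. A sketch, with a harmless shell pipeline standing in for the ssh/zfs/gzip one:

```python
import subprocess

# Harmless stand-in for:
#   ssh <remotehost> "zfs send ... | gzip -9" | gzip -d | zfs recv ...
cmd = "echo line1; echo line2; echo oops 1>&2"
process = subprocess.Popen(cmd, shell=True,
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()  # drains both pipes, then reaps the child
if process.returncode == 0:
    stdout_arr = [[line] for line in stdout.decode().splitlines()]
```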
