How do I execute Cassandra CLI commands from a Python script?

I have a python script that I want to use to make remote calls on a server, connect to Cassandra CLI, and execute commands to create keyspaces. One of the attempts that I made was something to this effect:
connect="cassandra-cli -host localhost -port 1960;"
create_keyspace="CREATE KEYSPACE someguy;"
exit="exit;"
final = Popen("{}; {}; {}".format(connect, create_keyspace, exit), shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True)
stdout, nothing = final.communicate()
Looking through various solutions, I'm not finding what I need. For example, the above code is throwing a "/bin/sh: 1: CREATE: not found", which I think means that it's not executing the CREATE statement on the CLI command line.
Any/all help would be GREATLY appreciated! Thank you!

try this out. I don't have cassandra-cli installed on my machine, so I couldn't test it myself.
from subprocess import check_output
from tempfile import NamedTemporaryFile

CASSANDRA_CMD = 'cassandra-cli -host localhost -port 1960 -f '

def cassandra(commands):
    # Write the statements to a temp file and hand it to cassandra-cli via -f.
    with NamedTemporaryFile(mode='w') as f:  # text mode so str can be written on Python 3
        f.write(';\n'.join(commands))
        f.flush()
        return check_output(CASSANDRA_CMD + f.name, shell=True)

cassandra(['CREATE KEYSPACE someguy', 'exit'])
As you mentioned in the comment below, pycassa (a Python client for Cassandra) can't be used here, since it doesn't seem to support CREATE statements.
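If you'd rather avoid the temp file, a rough, untested sketch (assuming cassandra-cli will read statements from stdin, and reusing the host/port from the question) is to pipe the commands straight into the CLI:
from subprocess import Popen, PIPE, STDOUT

# Untested sketch: feed the statements to cassandra-cli over stdin
# instead of building one shell string.
proc = Popen(['cassandra-cli', '-host', 'localhost', '-port', '1960'],
             stdin=PIPE, stdout=PIPE, stderr=STDOUT)
out, _ = proc.communicate(b'CREATE KEYSPACE someguy;\nexit;\n')
print(out.decode())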

Related

Connect to gce instance and run command

I have a simple request. I want to connect to an already existing google compute engine instance, run a command, and close the connection.
I have used the great sample code here for instance creation and deletion.
Additionally, I have a startup script running which works perfectly.
Now I am reading this article to use paramiko to connect to my instance. This may or may not be the best thing to do, so please correct me if I am going down the wrong path.
I have the following sample code:
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('35.***.***.**', username='user', password='pass')
stdin, stdout, stderr = ssh.exec_command("sudo su -")
stdin, stdout, stderr = ssh.exec_command("ls -l")
stdout.readlines()
Now - I am not sure which username or password I am supposed to use.
When I run this code, I do not get the list of files and directories in my root as I want, but I do get a list of files and directories in the default user account's home - so it is connecting.
My goal is to connect to a gce instance, run a command, and that is it! For some reason it is trickier than I anticipated. Am I doing something wrong here?
If you are facing a similar use case, you can explore gcloud compute ssh. It worked for me, but I can't say whether it is best practice or not.
My solution here was something like the following:
import subprocess

def check_for_completion(instance_name=""):
    cmd = "gcloud compute ssh %s --zone=us-east1-b --command=\"sudo -S -i -u root -p '' ls /root/temp/ \"" % instance_name
    try:
        res = subprocess.check_output(cmd, shell=True)
        items = str(res).split('\n')
        return {'response': items, 'complete': False}
    except subprocess.CalledProcessError:
        return {'response': None, 'complete': True}
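If you do stick with paramiko instead, note that each exec_command() call gets its own channel, which is why the sudo from the first call doesn't carry over to the second. A minimal sketch (reusing the ssh client from the question, and assuming passwordless sudo as is typical on GCE images) is to run everything as a single command:
# Sketch only: run the whole thing in one exec_command call,
# since each call starts a fresh shell on the server.
stdin, stdout, stderr = ssh.exec_command("sudo ls -l /root")
print(stdout.read().decode())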

open a putty window and run ssh commands - Python

I am new to Python. I need to log in to a server daily (Desktop -> 1.32 -> 0.20 -> 3.26). To do this I open PuTTY and log in over SSH. I want to script all of this in Python.
From googling I thought subprocess.Popen would do it, but it's not working.
First attempt:
import subprocess
pid = subprocess.Popen("putty.exe user@xxx.xx.x.32 -pw password").pid
This works (it opens a window and logs into .32), but I can't send any further input. I learned that to give input to the same process I need to use pipes.
Second attempt:
from subprocess import Popen, PIPE, STDOUT
p = Popen("putty.exe user@xxx.xx.x.32 -pw password", stdout=PIPE, stdin=PIPE, stderr=STDOUT)
grep_stdout = p.communicate(input=b'ssh xx.xx.x.20\n')[0]
print(grep_stdout.decode())
With this I can't even log in to the first server. Also, after logging in to all the servers, I need the terminal to stay alive. How can I do this?
Edit
I need to do this in a new PuTTY window, and the window must stay open after logging in because I have some manual work to do.
Use PowerShell to call PuTTY in order to open a new window:
from subprocess import Popen
Popen("powershell putty.exe user#host -pw mypassword")
Use the paramiko library for Python.
Establish an SSH connection using:
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname, username=username, password=password)
Check whether the connection is alive using:
status = ssh.get_transport().is_active()  # returns True if the connection is alive/active
Each ssh.exec_command() call is basically its own session. To run multiple commands in one session, pass them as a single string, e.g. exec_command('command1; command2').
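For example (a small sketch, not from the original answer):
# Both commands run in the same session because they are sent as one shell line.
stdin, stdout, stderr = ssh.exec_command('cd /tmp; ls -l')
print(stdout.read().decode())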
Alternatively, you can use an interactive shell to execute multiple commands in a single session:
channel = ssh.invoke_shell()
stdin = channel.makefile('wb')
stdout = channel.makefile('rb')
stdin.write('''
Command 1
Command 2
''')
print(stdout.read())
There is a SSHv2 protocol implementation for python: http://www.paramiko.org/. You can easily install it with pip:
pip install paramiko
Then you can create ssh client, connect to your host and execute commands:
import paramiko
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect('hostname', username='login', password='pwd')
stdin, stdout, stderr = ssh_client.exec_command('command')
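To actually see what the command printed, you would then read from stdout, e.g. (a small addition to the answer above):
# Read the command's output and its exit status.
print(stdout.read().decode())
print('exit status:', stdout.channel.recv_exit_status())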
I created a .bat file on Windows which references PuTTY and session-specific info. The .bat file can run by itself on Windows. To call it from Python, I used subprocess.run() (Python 3.5+).
Example of bat file named putty.bat:
start c:\app\PuTTy\putty.exe -load 192.168.1.230-node1-logs -l <logon user> -pw <logon user password for putty session>
Breaking down the bat file:
It begins with the Windows command "start".
c:\app\PuTTy\putty.exe --> the path to putty.exe on Windows.
-load --> tells PuTTY to load a saved profile. The profile is the name you see in the PuTTY client under "Saved Sessions".
192.168.1.230-node1-logs --> my session-specific profile.
-l --> followed by the PuTTY logon user.
-pw --> followed by the PuTTY logon password.
That concludes the contents of "putty.bat".
From within Python, I used the subprocess.run() command.
Example:
import subprocess
...
...
try:
    process = subprocess.run(["putty.bat"], check=True, stdout=subprocess.PIPE, universal_newlines=True)
    print(process.stdout)
except Exception as e:
    print("subprocess call error in open putty command")
    print(str(e))
I hope you find this helpful

Subprocess on remote server

I am using this code to execute a command on a remote server.
import subprocess
import sys
COMMAND="ls"
ssh = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
                       shell=False,
                       stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE)
result = ssh.stdout.readlines()
if result == []:
    error = ssh.stderr.readlines()
    print >>sys.stderr, "ERROR: %s" % error
else:
    print result
When I try to execute this script, I get prompt for password. Is there any way I could avoid it, for example, can I enter password in script somehow? Also, password should be encrypted somehow so that people who have access to the script cannot see it.
Why make it so complicated? Here's what I suggest:
1) Create an ssh config section in your ~/.ssh/config file:
Host myserver
    HostName 50.50.50.12 (fill in with your server's IP)
    Port xxxx (optional)
    User me (your username on the server)
2) If you haven't generated your ssh keypair yet, do it now (with ssh-keygen). Then upload the public key with:
$ ssh-copy-id myserver
3) Now you can use subprocess with ssh. For example, to capture output, I call:
result = subprocess.check_output(['ssh', 'myserver', 'cat', 'somefile'])
Simple, robust, and the only time a password is needed is when you copy the public key to the server.
BTW, your code will probably work just fine as well once you've done these steps.
One way is to create a key pair, put the public key on the server, and run ssh -i /path/to/key user@host, or use paramiko like this:
import paramiko
import getpass
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
p = getpass.getpass()
ssh.connect('hostname', username='user', password=p)
stdin, stdout, stderr = ssh.exec_command('ls')
print stdout.readlines()
ssh.close()
You should use pexpect or paramiko to connect to the remote machine, then spawn a child process and run the command you want from there.
Here's what I did when encountering this issue before:
Set up your ssh keys for access to the server.
Set up an alias for the server you're accessing. Below I'll call it remote_server.
Put the following two lines at the end of ~/.bash_profile.
eval $(ssh-agent -s)
ssh-add
Now every time you start your shell, you will be prompted for a passphrase. By entering it, you will authenticate your ssh keys and put them 'in hand' at the start of your bash session. For the remainder of your session you will be able to run commands like
ssh remote_server ls
without being prompted for a passphrase. Here ls will run on the remote server and return the results to you. Likewise your python script should run without password prompt interruption if you execute it from the shell.
You'll also be able to ssh to the server just by typing ssh remote_server without having to enter your username or password every time.
The upside to doing it this way is that you should be doing this anyway to avoid password annoyances and remembering funky server names :) Also you don't have to worry about having passwords saved anywhere in your script. The only potential downside is that if you want to share the python script with others, they'll have to do this configuring as well (which they should anyway).
You don't really need something like pexpect to handle this. SSH keys already provide a very good and secure solution to this sort of issue.
The simplest way to get the results you want would probably be to generate an ssh key and place it in the .ssh folder of your device. I believe github has a pretty good guide to doing that, if you look into it. Once you set up the keys correctly on both systems, you won't actually have to add a single line to your code. When you don't specify a password it will automatically use the key to authenticate you.
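In other words, once the key is in place, code like the snippet from the question runs without any password handling at all; a minimal sketch (hypothetical host name):
import subprocess

# No password anywhere in the script: the ssh client picks up the key itself.
ssh = subprocess.Popen(['ssh', 'user@myhost', 'ls'],
                       stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = ssh.communicate()
print(out.decode())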
While subprocess.Popen might work for wrapping ssh access, this is not the preferred way to do so.
I recommend using paramiko.
import paramiko
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(server, username=user,password=password)
...
ssh_client.close()
And if you want to simulate a terminal, as if a user were typing:
import time

chan = ssh_client.invoke_shell()

def exec_cmd(cmd):
    """Send ssh command(s), execute them, and return the output."""
    prompt = 'bash $'  # the command-line prompt in the ssh terminal
    buff = ''
    chan.send(str(cmd) + '\n')
    while not chan.recv_ready():
        time.sleep(1)
    while not buff.endswith(prompt):
        buff += chan.recv(1024)    # was ssh_client.chan.recv(1024), which is a typo
    return buff[:-len(prompt)]     # strip the trailing prompt from the output
Example usage: exec_cmd('pwd')
If you don't know the prompt in advance, you can set it with:
chan.send('PS1="python-ssh:"\n')
You could use the following.
import sys
from subprocess import Popen, PIPE

COMMAND = "ls"
ssh = Popen("powershell putty.exe user@HOST -pw password",
            stdout=PIPE, stdin=PIPE, stderr=PIPE)
result = ssh.stdout.readlines()
if result == []:
    error = ssh.stderr.readlines()
    print >>sys.stderr, "ERROR: %s" % error
else:
    print result

How to create a SSH tunnel using Python and Paramiko?

I need to create tunneling to read information from a database. I use Paramiko, but I have not worked with tunneling yet. Please provide a simple code example that creates and closes a tunnel.
At work we usually create ssh tunnels to forward ports. The way we do that is by running the standard command ssh -L port:addr:port addr with subprocess in a separate thread.
I found this useful link: https://github.com/paramiko/paramiko/blob/master/demos/forward.py with an example of doing port forwarding with paramiko.
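A rough sketch of that subprocess approach (hypothetical addresses; -N keeps ssh from running a remote command, so the process just holds the tunnel open):
import subprocess

# Forward local port 3306 to port 3306 on the remote side (sketch only).
tunnel = subprocess.Popen(['ssh', '-N', '-L', '3306:127.0.0.1:3306', 'user@ssh.example.com'])
# ... talk to the database through localhost:3306 here ...
tunnel.terminate()  # close the tunnel when done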
I used sshtunnel for my projects. Example of forwarding a remote MySQL port to a local port on the host:
pip install sshtunnel
python -m sshtunnel -U root -P password -L :3306 -R 127.0.0.1:3306 -p 2222 localhost
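The library also has a Python API; a minimal sketch mirroring the command line above (adjust host and credentials to your setup) looks roughly like this:
from sshtunnel import SSHTunnelForwarder

# Rough equivalent of the command line above (sketch only).
with SSHTunnelForwarder(
        ('localhost', 2222),
        ssh_username='root',
        ssh_password='password',
        remote_bind_address=('127.0.0.1', 3306),
        local_bind_address=('127.0.0.1', 3306)) as tunnel:
    pass  # connect to the database via 127.0.0.1:3306 while the tunnel is up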
Even though this does not use paramiko, I believe it's a very clean solution to implement (similar to @dario's answer but without managing the thread in Python).
There's this little-mentioned feature in the openssh client that allows us to control an ssh process through a unix socket; quoting man ssh:
-M    Places the ssh client into “master” mode for connection sharing. Multiple -M options places ssh into “master” mode with confirmation required before slave connections are accepted. Refer to the description of ControlMaster in ssh_config(5) for details.
-S ctl_path    Specifies the location of a control socket for connection sharing, or the string “none” to disable connection sharing. Refer to the description of ControlPath and ControlMaster in ssh_config(5) for details.
So you can start a background ssh process (with -Nf) and then check on it (or terminate it) with another ssh call.
I use this in a project that requires a reverse tunnel to be established:
import os
from subprocess import call, STDOUT

DEVNULL = open(os.devnull, 'wb')

CONFIG = dict(
    SSH_SERVER='ssh.server.com',
    SSH_PORT=2222,
    SSH_USER='myuser',
    SSH_KEY='/path/to/user.key',
    REMOTE_PORT=62222,
    UNIX_SOCKET='/tmp/ssh_tunnel.sock',
    KNOWN_HOSTS='/path/to/specific_known_host_to_conflicts',
)

def start():
    return call(
        [
            'ssh', CONFIG['SSH_SERVER'],
            '-Nfi', CONFIG['SSH_KEY'],
            '-MS', CONFIG['UNIX_SOCKET'],
            '-o', 'UserKnownHostsFile=%s' % CONFIG['KNOWN_HOSTS'],
            '-o', 'ExitOnForwardFailure=yes',
            '-p', str(CONFIG['SSH_PORT']),
            '-l', CONFIG['SSH_USER'],
            '-R', '%d:localhost:22' % CONFIG['REMOTE_PORT']
        ],
        stdout=DEVNULL,
        stderr=STDOUT
    ) == 0

def stop():
    return __control_ssh('exit') == 0

def status():
    return __control_ssh('check') == 0

def __control_ssh(command):
    return call(
        ['ssh', '-S', CONFIG['UNIX_SOCKET'], '-O', command, 'x'],
        stdout=DEVNULL,
        stderr=STDOUT
    )
-o ExitOnForwardFailure=yes makes sure the ssh command will fail if the tunnel cannot be established, otherwise it will not exit.
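Usage is then just (a short sketch):
# Bring the tunnel up, check it, and tear it down again.
if start():
    print('tunnel alive:', status())
    stop()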
Might I suggest trying something like pyngrok to programmatically manage an ngrok tunnel for you? Full disclosure, I am the developer of it. SSH example here, but it's as easy as installing pyngrok:
pip install pyngrok
and using it:
from pyngrok import ngrok
# <NgrokTunnel: "tcp://0.tcp.ngrok.io:12345" -> "localhost:22">
ssh_tunnel = ngrok.connect(22, "tcp")
I used paramiko for a project about a year ago; here is the part of my code where I connected to another computer/server and executed a simple Python file:
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname='...', username='...', password='...')
stdin, stdout, stderr = ssh.exec_command('python hello.py')
ssh.close()
stdin, stdout and stderr contain the input/output of the command you executed.
From here, I think you can make the connection with the database.
Here is some good information about paramiko.

Use subprocess to send a password

I'm attempting to use the python subprocess module to log in to a secure ftp site and then grab a file. However I keep getting hung up on just trying to send the password when it is requested. I so far have the following code:
from subprocess import Popen, PIPE
proc = Popen(['sftp', 'user@server', 'stop'], stdin=PIPE)
proc.communicate('password')
This still stops at the password prompt. If I enter the password manually it then goes to the ftp site and then enters the password on the command line. I've seen people suggest using pexpect but long story short I need a standard library solution. Is there anyway with subprocess and/or any other stdlib? What am I forgetting above?
Try
proc.stdin.write('yourPassword\n')
proc.stdin.flush()
That should work.
What you describe sounds like stdin=None where the child process inherits the stdin of the parent (your Python program).
Perhaps you should use an expect-like library instead?
For instance Pexpect (example). There are other, similar python libraries as well.
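A minimal Pexpect sketch for this particular sftp case (hypothetical host and file name, untested) might look like:
import pexpect

# Spawn sftp, wait for the password prompt, answer it, then grab the file.
child = pexpect.spawn('sftp user@server')
child.expect('password:')
child.sendline('yourPassword')
child.expect('sftp> ')
child.sendline('get remote_file.txt')
child.expect('sftp> ')
child.sendline('bye')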
from subprocess import Popen, PIPE
proc = Popen(['sftp', 'user@server', 'stop'], stdin=PIPE)
proc.communicate(input='password')
Try with input='password' in communicate; that worked for me.
Use Paramiko for SFTP. For anything else, this works:
import subprocess
args = ['command-that-requires-password', '-user', 'me']
proc = subprocess.Popen(args,
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
proc.stdin.write('mypassword\n')
proc.stdin.flush()
stdout, stderr = proc.communicate()
print stdout
print stderr
For some reason, I couldn't get any of the standard library answers here to work for me - getting very strange problems with all of them. Someone else here: unable to provide password to a process with subprocess [python] had the same problem, and concluded that ultimately you just have to go with pexpect to be able to send a password.
I wanted to add my final code here to just save the time of anyone having a similar problem, since I wasted so much time on this (Python 3, 2020):
import sys
import pexpect
from getpass import getpass

ssh_password = getpass("user's password: ")
ssh_password = (ssh_password + "\n").encode()
scp_command = 'scp xx.xx.xx.xx:/path/to/file.log /local/save/path/'
child = pexpect.spawn(scp_command)
# make output visible for debugging / progress watching
child.logfile = sys.stdout.buffer
i = child.expect([pexpect.TIMEOUT, "password:"])
if i == 0:
    print("Got unexpected output: {} {}".format(child.before, child.after))
    return  # (this snippet runs inside a function in my script)
else:
    child.sendline(ssh_password)
child.read()
The above code runs an SCP command to pull a file from the remote server onto your local computer - alter the server IP and paths as necessary.
Key things to remember:
Have to have a pexpect.TIMEOUT in the child.expect call
Have to encode to bytes whatever strings you pass in, and have to use the default encoding
Write pexpect output to sys.stdout.buffer so that you can actually see what is going on
Have to have a child.read() at the end
I would recommend scrapping the subprocess approach and using the paramiko package for sftp access.
This same problem plagued me for a week. I had to submit a password from user input through subprocess securely because I was trying to avoid introducing a command injection vulnerability. Here is how I solved the problem with a little help from a colleague.
import subprocess
command = ['command', 'option1', '--password']
subprocess.Popen(command, stdin=subprocess.PIPE).wait(timeout=60)
The .wait(timeout=int) was the most important component because it allows the user to feed input to stdin. Otherwise, the timeout is defaulted to 0 and leaves the user no time to enter input, which consequently results in a None or null string. Took me FOREVER to figure this out.
For repeated use cases where you know you'll have to do this multiple times, you can wrap the Popen call in a private helper function, which (I was told by the same programmer) is good practice if you anticipate someone else maintaining the code later and you don't want them to mess with it.
import os
import subprocess

def _popen(cmd):
    proc_h = subprocess.Popen(cmd, stdin=subprocess.PIPE)
    proc_h.wait(timeout=60)
    return proc_h.poll() == os.EX_OK
It is important to remove stdout=subprocess.PIPE if the user is going to be prompted for input. Otherwise, the process appears to hang for 60 seconds, and the user doesn't get a prompt, nor do they realize they are expected to give a password. The stdout will naturally go to the shell window and allow the user to pass input to popen().
Also, to explain why it returns proc_h.poll() == os.EX_OK: os.EX_OK is 0, which is what the process returns if the command succeeded. This is just C-style best practice for when you want to return system error codes in the event the function fails, while accounting for the fact that a return value of 0 would be treated as "false" by the interpreter.
This is a pure Python solution using expect - not pexpect.
If on Ubuntu you first need to install expect with:
sudo apt install expect
Python 3.6 or later:
import subprocess

def sftp_rename(from_name, to_name):
    sftp_password = 'abigsecret'
    sftp_username = 'foo'
    destination_hostname = 'some_hostname'
    from_name = 'oldfilename.txt'
    to_name = 'newfilename.txt'
    commands = f"""
    spawn sftp -o "StrictHostKeyChecking no" {sftp_username}@{destination_hostname}
    expect "password:"
    send "{sftp_password}\r"
    expect "sftp>"
    send "rename {from_name} {to_name}\r"
    expect "sftp>"
    send "bye\r"
    expect "#"
    """
    sp = subprocess.Popen(['expect', '-c', commands],
                          stdin=subprocess.PIPE, stdout=subprocess.PIPE)
Since what you want is just to grab a file: I tried to use subprocess, but it did not work for me, so now I am using paramiko. Here is my code.
Here is one tutorial I found online: Transfer a file from local server to remote server and vice versa using paramiko of python, https://www.youtube.com/watch?v=dtvV2xKaVjw
Below is my code for transferring all the files in one folder from Linux to Windows:
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname='11.11.11.1111', username='root', password='********', port=22)
sftp_client = ssh.open_sftp()

source_folder = '/var/ftp/file_pass'
local_folder = 'C:/temp/file_pass'
inbound_files = sftp_client.listdir(source_folder)
print(inbound_files)

for ele in inbound_files:
    try:
        path_from = source_folder + '/' + ele
        path_to = local_folder + '/' + ele
        sftp_client.get(path_from, path_to)
    except:
        print(ele)

sftp_client.close()
ssh.close()
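For the other direction (Windows back to Linux), the same client has a put() method; before the close() calls above you could do something like this (hypothetical file name):
# Sketch: upload goes the other way with put(local_path, remote_path).
sftp_client.put(local_folder + '/report.txt', source_folder + '/report.txt')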
Python has a built-in library called ftplib that can be used for FTP without any hassle (assuming the remote server has an FTP service running).
from ftplib import FTP
ftp = FTP('ftp.us.debian.org') # connect to host, default port
ftp.login() # user anonymous, passwd anonymous#
##'230 Login successful.'
ftp.cwd('debian') # change into "debian" directory
##'250 Directory successfully changed.'
ftp.retrlines('LIST')
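To actually grab a file (the original goal), retrbinary writes it to a local file, e.g. (hypothetical file name, pick one from the LIST output):
# Download a file in binary mode and close the connection (sketch only).
with open('README.html', 'wb') as f:
    ftp.retrbinary('RETR README.html', f.write)
ftp.quit()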
Otherwise, you can use the scp command-line tool. The password problem can be avoided by setting up passwordless (key-based) login on the remote host.
import os
os.system('scp remoteuser@remotehost:/remote/location/remotefile.txt /client/location/')
To set up passwordless login on Linux systems, follow the steps below (see also this SO answer):
ssh-keyscan remotehost >> known_hosts
ssh-keygen -t rsa        # press ENTER for every field (one time)
ssh-copy-id remoteuser@remotehost
The safest way to do this is to prompt for the password beforehand and then pipe it into the command. Prompting for the password will avoid having the password saved anywhere in your code. Here's an example:
from getpass import getpass
from subprocess import Popen, PIPE
password = getpass("Please enter your password: ")
proc = Popen("sftp user#server stop".split(), stdin=PIPE)
# Popen only accepts byte-arrays so you must encode the string
proc.communicate(password.encode())
import subprocess
args = ['command', 'arg1', 'arg2']
proc = subprocess.Popen(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
proc.stdin.write(b'password')  # the b prefix is necessary because stdin expects bytes
proc.stdin.flush()
stdout, stderr = proc.communicate()
print(stdout)
print(stderr)
You just forgot the line return (aka user pressing Enter) in your password.
from subprocess import Popen, PIPE
proc = Popen(['sftp', 'user@server', 'stop'], stdin=PIPE)
proc.communicate('password\n'.encode())
Also use .encode(), because by default proc.communicate() accepts bytes-like objects.
