Running consecutive commands with the Python Fabric module

I am writing an application that interacts with numerous systems, specifically with switches.
I am trying to implement a function that retrieves logs from a specific switch using Fabric (Python).
In a real session on the switch I would first run "enable" (and press the Enter key) and then run the "debug generate dump" command.
Using fabric.operations.run() I can only issue one command at a time,
and using fabric.operations.open_shell() is not an option since I need to parse the output and also close the connection once it finishes.
Can someone assist with this?
Thanks!
Here is an example of the code:
from fabric.api import env, run

def getSwitchLog(self, host, port, username, password):
    env.host_string = "%s:%s" % (host, port)
    env.user = username
    env.password = password
    command = 'enable \r debug generate dump'
    run(command, shell="cli", pty=True, combine_stderr=True, timeout=120)
shell="cli" because the switch does not run bash and 'cli' is the appropriate value in this case.
\r should have sent the Enter key, essentially sending 1. enable, 2. Enter, 3. debug generate dump.
This method works if I replace run with open_shell, but it seems run ignores \r.
I was able to achieve what I need using:
command = 'sshpass -p admin ssh admin@switchIP cli \"enable\" \"show version\"'
fabric.api.local(command, capture=True, shell=None)
However, this method is not as robust as fabric.api.run() and also requires the running node to have sshpass installed.
This is an example of the output from the switch CLI when the commands are entered interactively (via keyboard), without Fabric:
[standalone: master] > enable
[standalone: master] # debug generate dump
[standalone: master] # debug generate dump Generated dump sysdump-SX6036-1-20130630-104051.tgz
[standalone: master] #
thanks.

Working with a session state isn't something Fabric does; every call is a new session. There are some other projects that try to get around this, one being fexpect, but since you're attempting to query a switch I don't believe that will work, because fexpect (last I knew) uploads an expect script to the remote machine and then runs it.
What you might have better luck with, though, is pxssh from the pexpect module. It allows ssh+expect-style work simply enough. It's outside Fabric, but I think it's more likely to work for you right out of the gate.
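For example, a minimal pxssh sketch (untested; it assumes the switch accepts a plain SSH login and that its prompt matches the [standalone: master] output shown in the question):

from pexpect import pxssh

def get_switch_log(host, username, password):
    s = pxssh.pxssh()
    # The switch CLI is not bash, so match its own prompt and skip pxssh's prompt reset
    s.login(host, username, password,
            original_prompt=r'[>#]', auto_prompt_reset=False)
    s.PROMPT = r'\[standalone: master\] [>#]'   # pattern taken from the question's output
    s.sendline('enable')
    s.prompt()
    s.sendline('debug generate dump')
    s.prompt()
    output = s.before          # everything printed before the last prompt (bytes in Python 3)
    s.logout()
    return output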

Work with Robot Framework's SSHLibrary, which is based on paramiko. It has a simpler API (write/read_until/read_all) for interacting with your switch shell.
http://robotframework.org/SSHLibrary/latest/SSHLibrary.html
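SSHLibrary can also be used directly from Python; a rough, untested sketch (the method names mirror the library's keywords, and the prompt string and admin/admin credentials are assumptions based on the question; check the documentation linked above):

from SSHLibrary import SSHLibrary

ssh = SSHLibrary()
ssh.open_connection("switchIP")
ssh.login("admin", "admin")
ssh.write("enable")                      # write() appends a newline
ssh.read_until("#")                      # wait for the enable-mode prompt
ssh.write("debug generate dump")
output = ssh.read_until("#")             # capture everything up to the next prompt
ssh.close_connection()
print(output)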


How can you interact with and log in to the physical Steam client using Python?

In summary: I want to be able to interact with a non-logged-in Steam client and log it in, and I don't want to use some type of Windows-only interaction. I don't mean the SteamCLI, or logging into Steam using the steam library for Python; I mean directly interacting with the Steam client in some way and physically logging in.
When using SteamCLI and other modules I've noticed they just log you into their own session instead of the client session that you get from physically logging into Steam.
For example:
from steam.client import SteamClient

x = SteamClient()
x.login(username="username", password="password")

This doesn't actually log you in, since it is its own client.
I need this because I have made a script that can connect me to servers and figure out who's on them, and it relies on being logged in to said client.
Are there any modules/libraries that will allow me to do this? And if pywinauto is the one I should use, are there any guides you know of so I can interact with said application in a good way, even on Linux?
So here is what I ended up doing:
First you get a dummy display. I used this https://askubuntu.com/questions/453109/add-fake-display-when-no-monitor-is-plugged-in
After that you create the config (https://askubuntu.com/a/463000) and start up the display.
Next I created a systemd unit to launch Steam from the command line:
[Unit]
Description=example systemd service unit file.
[Service]
User=root
Environment=XAUTHORITY=/var/run/lightdm/root/:0
Environment=DISPLAY=:0
ExecStart=/bin/bash /root/start.sh
[Install]
WantedBy=multi-user.target
(You probably shouldn't be using root for this, but it's the easiest way I found to always have permissions for the XAUTHORITY environment variable, so you can log in to the display.)
Here is what /root/start.sh looks like:
#!/bin/sh
export DISPLAY=:0
export XAUTHORITY=/var/run/lightdm/root/:0
export PATH="$PATH:/usr/games"
steam -login username_goes_here password_goes_here steam_guard_code
Note that you may not need to export DISPLAY and XAUTHORITY twice; I haven't tested it.
This should just work™: it will log in, but it will ask for a Steam Guard code, so what you should do is attempt to start the service, receive the code through email, then append it after the password.
As for doing this in Python, as I said originally, you can probably use it to call start.sh, implementing command line arguments for the Steam Guard code. You need these arguments because I haven't found a way for it to trust the computer you use, so it might be beneficial to use Python IMAP for reading the codes sent to the email address that owns the account and passing them via said arguments.
Theoretical Section
Please note this whole section is untested, but intended to answer my original question.
Here's what a Python file might look like (very roughly):
import subprocess

def start_steam(code=""):
    # Assuming start.sh reads the Steam Guard code from the command line (my example didn't)
    proc = subprocess.Popen(["/bin/bash", "start.sh", code])
    # This function would just interact with an IMAP server to retrieve the email sent by
    # Steam Support with the code. If it's not found within a certain range of time, it
    # would theoretically just stop looking.
    steam_guard_code = do_something_to_find_steam_guard_code_email(timeout=30)
    if steam_guard_code:
        proc.terminate()
        start_steam(code=steam_guard_code)
    else:
        # proc should continue to run in the background until restart, so this script should
        # ALSO run on startup, or whenever there isn't a PID detected for Steam
        with open("/var/run/steam.pid", "w") as f:
            f.write(str(proc.pid))
        exit()
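The placeholder do_something_to_find_steam_guard_code_email above could be built on imaplib; this is a purely hypothetical sketch in which the mail server, credentials and code pattern are all assumptions:

import imaplib
import re
import time

def do_something_to_find_steam_guard_code_email(timeout=30):
    # Poll an IMAP inbox for an unread Steam email and pull out what looks like a
    # 5-character Steam Guard code. Server, credentials and regex are placeholders.
    deadline = time.time() + timeout
    while time.time() < deadline:
        imap = imaplib.IMAP4_SSL("imap.example.com")
        imap.login("me@example.com", "mail_password")
        imap.select("INBOX")
        _, data = imap.search(None, '(UNSEEN FROM "steampowered.com")')
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            match = re.search(rb"\b[A-Z0-9]{5}\b", msg_data[0][1])
            if match:
                imap.logout()
                return match.group(0).decode()
        imap.logout()
        time.sleep(5)
    return ""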
This Python script would have to be run by systemd; it would be a forking process, and PIDFile would have to be specified by systemd (I think) so that it can be monitored.
Edit: this solution has become even more theoretical, since it is not possible to pass a Steam Guard code to steam -login username password. You would need to disable Steam Guard.

Connecting to a particular port inside a switch using Python

Please find the below scenario:
I have one switch IP.
Multiple devices are connected to that switch on multiple port numbers.
Manually, we log in to the switch with credentials,
give a particular port number and press Enter twice,
then it asks for the credentials of that particular device,
and after entering those credentials it allows login to that device.
Then we give some commands.
Please help me with Python code for this scenario.
Try using the pexpect module.
Install pexpect with pip install pexpect.
Algorithm:
1) Spawn a pexpect child with child = pexpect.spawn('ssh user@switch-ip') or pexpect.spawn('telnet switch-ip'), depending on the protocol you use to log in to the switch.
2) Then expect the prompt of the switch's terminal, like this: child.expect("#")
3) Now you can run commands using child.sendline("your command here")
Do step 3 as many times as you would do manually. You can then generalize the script by passing values as arguments, and everything will be done automatically.
More about pexpect here.
And here are a few examples on that same website.
This should automate your scenario pretty easily.
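As a rough, untested sketch of the scenario above (every prompt pattern such as "assword:", "ogin:" and "#" is an assumption and should be adjusted to whatever your switch and device actually print):

import pexpect

def run_on_device(switch_ip, switch_user, switch_pass,
                  port, dev_user, dev_pass, commands):
    child = pexpect.spawn("ssh %s@%s" % (switch_user, switch_ip), timeout=30)
    child.expect("assword:")
    child.sendline(switch_pass)
    child.expect("#")              # switch prompt
    child.sendline(str(port))      # give the port number the device is connected to
    child.sendline("")             # press Enter twice, as in the manual procedure
    child.sendline("")
    child.expect("ogin:")          # the device now asks for its own credentials
    child.sendline(dev_user)
    child.expect("assword:")
    child.sendline(dev_pass)
    child.expect("#")              # device prompt
    output = []
    for cmd in commands:
        child.sendline(cmd)
        child.expect("#")
        output.append(child.before.decode())
    child.sendline("exit")
    child.close()
    return output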

Python raw_input malfunctions and returns -bash: line 1: <INPUT>: command not found

I have written a script that establishes an SSH tunnel and connects to a database over that tunnel.
Extremely simplified nutshell (obvious parameters and extra logic omitted):
sshTunnelCmd = "ssh -N -p %s -L %s:127.0.0.1:%s -i %s %s#%s" % (
sshport, localport, remoteport, identityfile, user, server
)
args = shlex.split(sshTunnelCmd)
tunnel = subprocess.Popen(args)
time.sleep(2)
con = MySQLdb.connect(host="127.0.0.1", port=localport, user=user, passwd=pw, db=db)
## DO THE STUFF ##
con.close()
tunnel.kill()
The shell-equivalent commands are below, and I have tested both the commands and the script to work in "clean client" conditions, i.e. after a reboot.
ssh -N -p 22 -L 5000:127.0.0.1:3306 user@server
mysql --port 5000 -h 127.0.0.1 -u dbuser -p
SSH login is with keys and in ~/.ssh/config the server is configured as
Host server
Hostname F.Q.D.N
Port 22
User user
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h:%p
ControlPersist 600
IdentityFile ~/.ssh/id_rsa
In the ## DO THE STUFF ## section there is code that tries to connect to the database as regular users. If an exception is raised, it asks for manual input of root credentials, creates the regular users and continues doing the stuff (ordinary queries, all tested manually and working in the Python code under clean-client conditions).
ruz = raw_input('root user? ')
print (ruz)
rup = raw_input('root password? ')
print (rup)
print ("Root connecting to database.")
try:
    cxroot = MySQLdb.connect(host=host, port=port, user=ruz, passwd=rup)
    cur = cxroot.cursor()
except MySQLdb.Error, e:
    print ("Root failed, sorry.")
    print "Error %d: %s" % (e.args[0], e.args[1])
    print ("GAME OVER.")
    return -1
Under clean-client conditions, the first and some subsequent executions work well, including when I test the script's robustness and remove the user server-side. However, at some point it hangs in a weird way after the second raw_input in the code block above. Output:
root user? root
root
root password? s3cReTsTr1n9
-bash: line 1: s3cReTsTr1n9: command not found
The only thing I can do at this point is kill the process or hit CTRL+C, which is followed by this traceback:
^CTraceback (most recent call last):
  File "./initdb.py", line 571, in <module>
    main()
  File "./initdb.py", line 526, in main
    connection = connectDB ('127.0.0.1', localport, dbuser, dbpw, db)
  File "./initdb.py", line 128, in connectDB
    rup = raw_input('root password? ')
KeyboardInterrupt
Another unexpected symptom I noticed is that keyboard input to the terminal window (I am running this in a bash terminal within Xubuntu 14.04 LTS) becomes spuriously unresponsive, so I have to close the terminal tab and start a new one. This clears keyboard input, but not the script's behaviour.
I have tried to search for a solution but the usual search engines are not helpful in my case, probably because I do not completely understand what is going on. I suspect that keyboard input is somehow redirected to a process, possibly the tunnel subprocess, but I cannot explain why the first raw_input works as expected and the second one does not.
I am also uncomfortable with the way I create the tunnel, so any advice for a more robust tunnel creation is welcome. Specifically, I would like to have more fine grained control over the tunnel creation, rather than waiting an arbitrary two seconds for the tunnel to be established because I have no feedback from that subprocess.
Thanks for sharing your time and expertise.
There are two sections to my answer: how I'd go about diagnosing this, and how I would go about doing this.
To begin with, I'd suggest using the prompt that's failing as an opportunity to do some exploration.
There are two approaches you could take here:
Just enter hostname (or whatever) to find out where it's running.
Enter bash, or if the remote end has an X server, add -X to your ssh command and then type a terminal program (xterm, gnome-terminal, etc.). In your new shell you can poke around to see what's going on.
If you determine it's running on the client side you could diagnose it with strace:
strace -f -o blah.log yourscript.py
... where you'd enter an easy-to-search string for the password, then search for that in blah.log. Because of the -f flag it will print the PID of the process that attempted to execute it; backtracking from there you'll probably find that the PID started with a fork from another PID. That PID is what tried to execute it, so you should be able to investigate from there.
As for how I'd do this: I'm still fairly new to Python, so I would have been inclined to use Perl or expect. Down the Perl path you might look at:
Net::SSH::Tunnel; this is probably the first one I'd look at using.
Use open or open3 and then do something hacky like:
waiting for stdout on the process to have text available; you'd have to get rid of -N for that, and you'd be at the mercy of remote auto-logout.
one of the various responses to ssh-check-if-a-tunnel-is-alive.
Net::SSH::Expect (e.g. this post, though I didn't look at his implementation, so you'd have to make your own choice on that). This or the "real" expect are probably overkill, but I'm sure you could find a way.
Although Ruby has a gem like Perl's Net::SSH::Tunnel, I don't see an equivalent package for Python. This question and this one both discuss it, and they seem to indicate you're limited to either starting it as a subprocess or using paramiko.
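For completeness, the third-party sshtunnel package on PyPI wraps paramiko for exactly this use case; a minimal sketch, reusing the host, ports, key path and database names from the question:

import os
import MySQLdb
from sshtunnel import SSHTunnelForwarder

tunnel = SSHTunnelForwarder(
    ("F.Q.D.N", 22),
    ssh_username="user",
    ssh_pkey=os.path.expanduser("~/.ssh/id_rsa"),
    remote_bind_address=("127.0.0.1", 3306),
    local_bind_address=("127.0.0.1", 5000),
)
tunnel.start()    # returns once forwarding is established, no arbitrary sleep needed
con = MySQLdb.connect(host="127.0.0.1", port=tunnel.local_bind_port,
                      user="dbuser", passwd="pw", db="db")
## DO THE STUFF ##
con.close()
tunnel.stop()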
Are you free to configure the server as you like?
Then try a VPN connection instead of SSH port forwarding. It will reconnect more easily without affecting your application, so the tunnel may be more stable.
For the raw_input problem I cannot see why it happens, but maybe the ssh command in a shell interferes with your terminal? If you really want to integrate the SSH tunnel, you may want to look at some Python modules for handling SSH.
-bash: line 1: s3cReTsTr1n9: command not found
I got the same "-bash: ...: command not found" error even though I was just accepting raw_input() / input(). I tried this in both the 2.7 and 3.7 versions.
I was trying to run a client-server program on the same Mac machine. I had two files, server.py and client.py. Each time, in the terminal, I first ran server.py in the background and then ran client.py:
Terminal 1: python server.py &
Terminal 2: python client.py
Each time I got the error "-bash: xxxx: command not found", where xxxx is whatever input I gave.
Finally, after spending 5 hours on this, I stopped running server.py in the background:
Terminal 1: python server.py
Terminal 2: python client.py
And voilà, it worked. raw_input and input did not give me this error again.
I am not sure if this helps, but this is the only post I found on the internet with exactly the same issue as mine, and I thought maybe it would help.

python: interact with shell command using Popen

I need to execute several shell commands using Python, but I couldn't resolve one of the problems: when I scp to another machine, it usually prompts and asks whether to add this machine to the known hosts. I want the program to input "yes" automatically, but I couldn't get it to work. My program so far looks like this:
from subprocess import Popen, PIPE, STDOUT

def auto():
    user = "abc"
    inst_dns = "example.com"
    private_key = "sample.sem"
    capFile = "/home/ubuntu/*.cap"
    temp = "%s@%s:~" % (user, inst_dns)
    scp_cmd = ["scp", "-i", private_key, capFile, temp]
    print ( "The scp command is: %s" % " ".join(scp_cmd) )
    scpExec = Popen(scp_cmd, shell=False, stdin=PIPE, stdout=PIPE)

    # this is the place I tried to write "yes"
    # but it doesn't work
    scpExec.stdin.write("yes\n")
    scpExec.stdin.flush()

    while True:
        output = scpExec.stdout.readline()
        print ("output: %s" % output)
        if output == "":
            break
If I run this program, it still prompts and asks for input. How can I respond to the prompt automatically? Thanks.
You're being prompted to add the host key to your known_hosts file because ssh is configured for StrictHostKeyChecking. From the man page:
StrictHostKeyChecking
    If this flag is set to “yes”, ssh(1) will never automatically add host keys to the ~/.ssh/known_hosts file, and refuses to connect to hosts whose host key has changed. This provides maximum protection against trojan horse attacks, though it can be annoying when the /etc/ssh/ssh_known_hosts file is poorly maintained or when connections to new hosts are frequently made. This option forces the user to manually add all new hosts. If this flag is set to “no”, ssh will automatically add new host keys to the user known hosts files. If this flag is set to “ask”, new host keys will be added to the user known host files only after the user has confirmed that is what they really want to do, and ssh will refuse to connect to hosts whose host key has changed.
You can set StrictHostKeyChecking to "no" if you want ssh/scp to automatically accept new keys without prompting. On the command line:
scp -o StrictHostKeyChecking=no ...
You can also enable batch mode:
BatchMode
    If set to “yes”, passphrase/password querying will be disabled. This option is useful in scripts and other batch jobs where no user is present to supply the password. The argument must be “yes” or “no”. The default is “no”.
With BatchMode=yes, ssh/scp will fail instead of prompting (which is often an improvement for scripts).
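Applied to the question's code, that might look like the following sketch (untested; the key, wildcard path and destination are the question's example values, and glob is used because nothing expands the *.cap wildcard when shell=False):

import glob
from subprocess import Popen, PIPE

# Same scp call as in the question, but ssh accepts new host keys automatically and,
# with BatchMode, fails instead of prompting if anything still needs interactive input.
scp_cmd = (
    ["scp", "-o", "StrictHostKeyChecking=no", "-o", "BatchMode=yes", "-i", "sample.sem"]
    + glob.glob("/home/ubuntu/*.cap")
    + ["abc@example.com:~"]
)
proc = Popen(scp_cmd, stdout=PIPE, stderr=PIPE)
out, err = proc.communicate()
print(proc.returncode, out, err)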
The best way I know to avoid being asked about fingerprint matches is to pre-populate the relevant keys in ~/.ssh/known_hosts. In most cases, you really should already know what the remote machines' public keys are, and it is straightforward to put them in a known_hosts file that ssh can find.
In the few cases where you don't, and can't, know the remote public key, the most correct solution depends on why you don't know. If, say, you're writing software that needs to run on arbitrary user boxes and may need to ssh on the user's behalf to other arbitrary boxes, it may be best for your software to run ssh-keyscan on its own to acquire the ostensible remote public key, let the user approve or reject it explicitly if at all possible, and, if approved, append the key to known_hosts and then invoke ssh.
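A hedged, Python 3 sketch of that ssh-keyscan flow (the host name is a placeholder and the key type is an assumption; real code would also handle scan failures and duplicate entries):

import os
import subprocess

host = "example.com"  # hypothetical remote host
# Fetch the host's ostensible public key and let the user approve it explicitly.
key = subprocess.check_output(["ssh-keyscan", "-t", "rsa", host]).decode()
print("Offered host key:\n" + key)
if input("Append this key to known_hosts? [y/N] ").lower() == "y":
    with open(os.path.expanduser("~/.ssh/known_hosts"), "a") as f:
        f.write(key)
    subprocess.call(["ssh", host, "true"])  # now connects without a fingerprint prompt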

Python telnetlib determining different login prompts

I've written a Python script that can log into multiple devices sequentially by reading a CSV file listing the different IP addresses. From there it outputs a file for each device with the content from a few commands that are passed to the devices via the script. So I've come pretty far. A problem I'm running into is that sometimes the script hangs, and that is because some devices have different software revisions and do not support certain commands that are being passed to them. The difference I'm focusing on is the prompt after login. For example, logging into device type A gives a command prompt of xyz#, while device type B has a prompt of abc:. It's the same manufacturer, just a different model and/or software rev. Depending on the command prompt, I know the commands I can run on that device without the script hanging up. So what I need to be able to do is, after a successful login, run a set of specific commands depending on the command prompt I get.
I can post some of my code if that would help, but what I'm really looking to find out is whether this is even possible, and if so, to get some pointers and a few suggestions on what I might try. After working with Python for a few months I know there has to be a way to do this. I usually don't post because I can work through others' posts and develop a working solution, but I've been working on this for a bit and haven't been able to piece it together, so I'm looking for an assist.
-Shane
EDIT
At this point I'm still unable to write code that would determine the command prompt, at least while the telnet session is up. I can telnet in, run some commands and close the session. I can then write the results to a file, and from there read the file to determine the prompt. But ideally I'd like to be able to open a telnet session, run a command to determine the prompt while the session is still open, read it while the session is up, and then, based on the prompt, run specific commands.
The issue seems to be that I'm not able to read any command output while the telnet session is still up. The session has to close, and then all output is written to a file. Then I read the file to determine the command prompt, determine which commands to run based on the prompt, and open a new telnet session to run those commands.
Should I accept the fact that I have to close the telnet session, write the data to a file, read it to determine the prompt, and then loop back through the login part of the script again? Or am I missing something? Not sure if I'm being clear in my description.
I would implement the commands using a common interface and then use a dictionary to retrieve them when I know what system I am connected to:
# command set for system xyz#
def copy1(src, dest):
    pass

def list1():
    pass

# command set for system abc:
def copy2(src, dest):
    pass

def list2():
    pass

cmdDict = {
    # prompt: command set
    'xyz#': [copy1, list1],
    'abc:': [copy2, list2],
}

...

# guess the right commands from the prompt we have read
copyCommand = cmdDict[detected_prompt][0]
listCommand = cmdDict[detected_prompt][1]

...

# proceed normally
listCommand()
copyCommand(f1, g1)
copyCommand(f2, g2)
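Regarding the EDIT in the question: detected_prompt can be read while the telnet session is still open, because telnetlib's Telnet.expect() waits for whichever pattern appears first. A rough sketch (host, credentials and the login prompts are placeholders, not taken from the original post):

import telnetlib

tn = telnetlib.Telnet("192.0.2.1", timeout=10)    # placeholder device IP
tn.read_until(b"login: ")
tn.write(b"admin\n")
tn.read_until(b"Password: ")
tn.write(b"secret\n")

# Wait for whichever prompt shows up first; the match tells us the device type.
index, match, text = tn.expect([b"xyz#", b"abc:"], timeout=10)
detected_prompt = match.group(0).decode() if match else None

# ... look up the right command set in cmdDict, send commands with tn.write()
#     and read each response with tn.read_until(prompt_bytes) ...
tn.close()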
