How to omit "connect: Network is unreachable" message - Python

I created a script, run on boot, which checks whether my Raspberry Pi has an internet connection and at the same time updates the time (courtesy of NTP) via os.system().
import datetime, os, socket, subprocess, time
from time import sleep

dir_path = os.path.dirname(os.path.abspath(__file__))

def internet(host="8.8.8.8"):
    # ping once; discard stdout so the result line does not clutter the log
    result = subprocess.call("ping -c 1 " + host, stdout=open(os.devnull, 'w'), shell=True)
    return result == 0

timestr = time.strftime("%Y-%m-%d--%H:%M:%S")

netstatus = internet()
while not netstatus:
    sleep(30)
    netstatus = internet()
if netstatus:
    print "successfully connected! updating time . . . "
    os.system("sudo bash " + dir_path + "/updatetime.sh")
    print "time updated! time check %s" % datetime.datetime.now()
where updatetime.sh contains the following:
service ntp stop
ntpd -q -g
service ntp start
This script runs at boot/reboot, 24/7, at our workplace, and output from scripts like these is saved to a log file. It's working fine, but is there a way NOT to output connect: Network is unreachable when there's no internet connection? Thanks.
EDIT
I run this script via a shell script named launch.sh, which runs check_net.py (this script) along with other preliminary scripts, and I placed launch.sh in my crontab to run on boot/reboot:
@reboot sh /home/pi/launch.sh > /home/pi/logs/cronlog 2>&1
From what I've read in this thread: what does '>/dev/null 2>&1' mean, 2 handles stderr whereas 1 handles stdout.
I am new to this. I wish to keep my stdout, but not the stderr (in this case, only the connect: Network is unreachable messages).
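In Python terms, a minimal sketch of silencing that stream at its source (redirect the ping call's stderr, not just stdout, to os.devnull):

import os, subprocess

def internet(host="8.8.8.8"):
    devnull = open(os.devnull, 'w')
    # discard both streams; "connect: Network is unreachable" arrives on stderr
    result = subprocess.call("ping -c 1 " + host,
                             stdout=devnull, stderr=devnull, shell=True)
    devnull.close()
    return result == 0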
/ogs

As per @shellter's link suggestion in the comments, I restructured my cron entry to:
@reboot sh /home/pi/launch.sh 2>&1 > /home/pi/logs/cronlog | grep "connect: Network is unreachable"
Alternatively, I also came up with another solution, which checks for an internet connection in a different way, using urllib2.urlopen():
import urllib2

def internet_init():
    try:
        urllib2.urlopen('https://www.google.com', timeout=1)
        return True
    except urllib2.URLError:
        return False
Either of the two methods above keeps the connect: Network is unreachable error output out of my logs.
Thanks!
/ogs

Related

Command output is corrupted when executed using Python Paramiko exec_command

I'm a software tester, trying to verify that the log on a remote QNX (a Unix-like RTOS) machine will contain the correct entries after specific actions are taken. I am able to list the contents of the directory in which the log resides, and use that information in the command to read the file (I really want to use tail -n XX <file>). So far, I always get a "(No such file or directory)" when trying to read the file.
We are using Froglogic Squish for automated testing, because the Windows UI (that interacts with the server piece on QNX) is built using Qt extensions for standard Windows elements. Squish uses Python 2.7, so I am using Python 2.7.
I am using paramiko for the SSH connection to the QNX server. This has worked great for sending commands to the simulator piece that also runs on the QNX server.
So, here's the code. Some descriptive names have been changed to avoid upsetting my employer.
import sys
import time
import select
sys.path.append(r"C:\Python27\Lib\site-packages")
sys.path.append(r"C:\Python27\Lib\site-packages\pip\_vendor")
import paramiko

# Import SSH configuration variables
ssh_host = 'vvv.xxx.yyy.zzz'
thelog_dir = "/logs/the/"
ssh_user = 'un'
ssh_pw = 'pw'

def execute_Command(fullCmd):
    outptLines = []
    #
    # Try to connect to the host.
    # Retry a few times if it fails.
    #
    i = 1
    while True:
        try:
            ssh = paramiko.SSHClient()
            ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            ssh.connect(ssh_host, 22, ssh_user, ssh_pw)
            break
        except paramiko.AuthenticationException:
            log("Authentication failed when connecting to %s" % ssh_host)
            return 1
        except:
            log("Could not SSH to %s, waiting for it to start" % ssh_host)
            i += 1
            time.sleep(2)
        # If we could not connect within time limit
        if i == 30:
            log("Could not connect to %s. Giving up" % ssh_host)
            return 1
    # Send the command (non-blocking?)
    stdin, stdout, stderr = ssh.exec_command(fullCmd, get_pty=True)
    for line in iter(stdout.readline, ""):
        outptLines.append(line)
    #
    # Disconnect from the host
    #
    ssh.close()
    return outptLines

def get_Latest_Log():
    fullCmd = "ls -1 %s | grep the_2" % thelog_dir
    files = execute_Command(fullCmd)
    theFile = files[-1]
    return theFile

def main():
    numLines = 20
    theLog = get_Latest_Log()
    print("\n\nThe latest log is %s\n\n" % theLog)
    fullCmd = "cd /logs/the; tail -n 20 /logs/the/%s" % theLog
    #fullCmd = "tail -n 20 /logs/the/%s" % theLog
    print fullCmd
    logLines = execute_Command(fullCmd)
    for line in logLines:
        print line

if __name__ == "__main__":
    # execute only if run as a script
    main()
I have tried to read the file using both tail and cat. I have also tried to get and open the file using Paramiko's SFTP client.
In all cases, the attempt to read the file fails, despite the fact that listing the contents of the directory works fine. (?!) And BTW, the log file is supposed to be readable by 'world'. Permissions are -rw-rw-r--.
The output I get is:
"C:\Users\xsat086\Documents\paramikoTest>python SSH_THE_MsgChk.py
The latest log is the_20210628_115455_205.log
cd /logs/the; tail -n 20 /logs/the/the_20210628_115455_205.log
(No such file or directory)the/the_20210628_115455_205.log"
The file name is correct. If I copy and paste the tail command into an interactive SSH session with the QNX server, it works fine.
Is it something to do with the 'non-interactive' nature of this method of sending commands? I read that some implementations of SSH are built upon a command that offers a very limited environment. I don't see how that would impact this tail command.
Or am I doing something stupid in this code?
I cannot fully explain why you get the exact results you get.
But in general, corrupted output is a result of enabling, and then not handling, terminal emulation. You enable terminal emulation with get_pty=True. Remove it. You should not use terminal emulation when automating command execution.
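A minimal sketch of the corrected call inside execute_Command (same variables as in the question's code):

# no pseudo-terminal requested: output arrives exactly as the remote command wrote it
stdin, stdout, stderr = ssh.exec_command(fullCmd)
for line in iter(stdout.readline, ""):
    outptLines.append(line)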
Related question:
Is there a simple way to get rid of junk values that come when you SSH using Python's Paramiko library and fetch output from CLI of a remote machine?

Gracefully abort remote Windows command executed over SSH from Windows Python Paramiko script when Ctrl+C is pressed

I have a follow-up question that builds on the question I asked here: Run multiple commands in different SSH servers in parallel using Python Paramiko, which was already answered.
Thanks to the answer at that link, my Python script is as follows:
# SSH.py
import paramiko
import argparse
import os

path = "path"
python_script = "worker.py"

# definitions for ssh connection and cluster
ip_list = ['XXX.XXX.XXX.XXX', 'XXX.XXX.XXX.XXX', 'XXX.XXX.XXX.XXX']
port_list = [':XXXX', ':XXXX', ':XXXX']
user_list = ['user', 'user', 'user']
password_list = ['pass', 'pass', 'pass']
node_list = list(map(lambda x: f'-node{x + 1} ', list(range(len(ip_list)))))
cluster = ' '.join([node + ip + port for node, ip, port in zip(node_list, ip_list, port_list)])

# run script on command line of local machine
os.system(f"cd {path} && python {python_script} {cluster} -type worker -index 0 -batch 64 > {path}/logs/'command output'/{ip_list[0]}.log 2>&1")

# loop for IP and password
stdouts = []
clients = []
for i, (ip, user, password) in enumerate(zip(ip_list[1:], user_list[1:], password_list[1:]), 1):
    try:
        print("Open session in: " + ip + "...")
        client = paramiko.SSHClient()
        client.connect(ip, username=user, password=password)
    except paramiko.SSHException:
        print("Connection Failed")
        quit()
    try:
        path = f"C:/Users/{user}/Desktop/temp-ines"
        stdin, stdout, stderr = client.exec_command(
            f"cd {path} && python {python_script} {cluster} -type worker -index {i} -batch 64 > "
            f"C:/Users/{user}/Desktop/{ip}.log 2>&1 &"
        )
        clients.append(client)
        stdouts.append(stdout)
    except paramiko.SSHException:
        print("Cannot run file. Continue with other IPs in list...")
        client.close()
        continue

# Wait for commands to complete
for i in range(len(stdouts)):
    print("hello")
    stdouts[i].read()
    print("hello1")
    clients[i].close()
    print("hello2")

print("\n\n***********************End execution***********************\n\n")
This script, which is run locally, is able to SSH into the servers and run the command (i.e., run a python script called worker.py and log the command output to a log file); it gets through the first for loop with no issues.
My issue is related to the second for loop. Please see the print statements I added in the second for loop to be clear. When I run SSH.py locally, this is what I observe:
As you can see, I ssh into each of the servers and then stay at reading the command output of the first server I ssh over to. The worker.py script can take 30 mins or so to complete and the command output is the same on each server -- so it will take 30 mins to read the command output of the first server, then close the SSH connection of the first server, take a couple seconds to read the command output of the second server (as it is the same as the first one and would already be entirely printed), close its SSH connection, and so on. Please see below some of the command line output, if this helps.
Now, my question is, what if I don't want to wait until the worker.py script finishes, i.e., those entire 30 mins? I cannot/do not know how to raise a KeyboardInterrupt exception. What I have tried is quitting the local SSH.py script. However, as you can see from the print statements, this will not close the SSH connections although the training, and thus the log files, will stop logging info. In addition, after I quit the local SSH.py script, if I try to delete any of the log files, I get an error saying "cannot delete file because it is being used in cmd.exe" -- this only happens sometimes and I believe it is because of not closing the SSH connections?
First run in python console:
It hangs: the local python and log file are running and saving, but there are no print statements, and no python or log file is run/saved on the servers.
I run it again so a second process starts:
Now the first process no longer hangs (python running and log files being saved on the server), and I can close this second run/process. It is as if the second run/process clears the hang of the first.
If I run python SSH.py in the terminal, it just hangs.
This was not happening before.
If you know that SSHClient.close cleanly closes the connection and aborts the remote command, call it in response to KeyboardInterrupt.
For this you cannot use the simple solution with stdout.read, as it blocks and prevents handling of Ctrl+C on Windows.
Use the waiting code from my answer to Run multiple commands in different SSH servers in parallel using Python Paramiko (the while any(x is not None for x in stdouts): snippet).
And wrap it in try: ... except KeyboardInterrupt:.
import time  # needed for the polling sleep below

try:
    while any(x is not None for x in stdouts):
        for i in range(len(stdouts)):
            stdout = stdouts[i]
            if stdout is not None:
                channel = stdout.channel
                # To prevent losing output at the end, first test for exit,
                # then for output
                exited = channel.exit_status_ready()
                while channel.recv_ready():
                    s = channel.recv(1024).decode('utf8')
                    print(f"#{i} stdout: {s}")
                while channel.recv_stderr_ready():
                    s = channel.recv_stderr(1024).decode('utf8')
                    print(f"#{i} stderr: {s}")
                if exited:
                    print(f"#{i} done")
                    clients[i].close()
                    stdouts[i] = None
        time.sleep(0.1)
except KeyboardInterrupt:
    print("Aborting")
    for i in range(len(clients)):
        print(f"#{i} closing")
        clients[i].close()
If you do not need to separate the stdout and stderr, you can greatly simplify the code by using Channel.set_combine_stderr. See Paramiko ssh die/hang with big output.
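A minimal sketch of that simplification (command stands in for the real command line):

# open the channel by hand so stderr can be merged before the command starts
channel = client.get_transport().open_session()
channel.set_combine_stderr(True)   # folds stderr into the stdout stream
channel.exec_command(command)
# now only recv_ready()/recv() need polling; the recv_stderr_* calls go away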

paramiko equivalent of "cat File.gz | ssh address script.sh" in python 3.7

Command I'm trying to run using paramiko in python 3.7:
Windows:
type file.ext4.gz | ssh user@address sudo update.sh
Mac:
cat file.ext4.gz | ssh user@address sudo update.sh
From the cmd / terminal and from .bat / .sh this works, after entering the password. I've been working on a simple python gui (PySimpleGUI) to allow the user to do this, but without the need to enter the password (it is saved from the initial connection).
I've tried:
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(config["IP_ADDRESS"], username=config["USERNAME"], password=config["PASSWORD"], timeout=5)
a = client.open_sftp()
a.put(file_location, "sh update.sh", callback=sent)
While this works to send the file, it doesn't run it and gives the error:
OSError: Failure
I don't want to do this in subprocess, as this tool is to prevent the use of terminal for the "end user"
I've been beating my head against this for 2 days now. Thank you.
EDIT:
Here is the STDIO Code:
def send_ssh(value, input=None):
if input:
transport = client.get_transport()
channel = transport.open_session()
channel.exec_command(value)
with open(input, "rb") as file:
for chunk in iter(functools.partial(file.read, read_size), b''):
if channel.send_ready():
channel.sendall(chunk)
if channel.recv_ready():
print(channel.recv(1024).decode().strip())
if channel.recv_stderr_ready():
print(channel.recv_stderr(1024).decode().strip())
while not channel.exit_status_ready():
if channel.recv_ready():
print(channel.recv(1024).decode().strip())
if channel.recv_stderr_ready():
print(channel.recv_stderr(1024).decode().strip())
else:
w, r, e = client.exec_command(value, get_pty=True)
error = e.read().strip().decode()
if error != "":
return error
else:
return r.read().strip().decode()
Once the file is cat'ed to the script, it is then verified by the script. I worked around this by just using SFTP to send the file and then running my
cat file | sudo script.sh
This works, but it does require transferring a 600 MB file (thankfully always over a local connection (LAN)) each time. The above code does transfer the file, but it doesn't complete. If I just try sending it via for line in file:, the file gets corrupted.
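For reference, a minimal sketch of that workaround (the remote path /tmp/file.ext4.gz is an assumption):

# upload the image over SFTP, then pipe it into the script remotely
sftp = client.open_sftp()
sftp.put("file.ext4.gz", "/tmp/file.ext4.gz")
sftp.close()
stdin, stdout, stderr = client.exec_command("cat /tmp/file.ext4.gz | sudo update.sh")
print(stdout.read().decode())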
Keeping things simpler, below we're using threading to allow synchronous APIs to be used rather than needing to write explicit asynchronous code:
import sys
import shutil
from threading import Thread
from paramiko import SSHClient

client = SSHClient()
client.load_system_host_keys()
client.connect('address', username='user')  # paramiko does not parse user@host; pass them separately

# here's the important part: we're using the file handles returned by exec_command()
update_stdin, update_stdout, update_stderr = client.exec_command('sudo update.sh')

# copy stdout and stderr from the remote thread to our own process's stdout and stderr
t_out = Thread(target=shutil.copyfileobj, args=[update_stdout, sys.stdout]); t_out.start()
t_err = Thread(target=shutil.copyfileobj, args=[update_stderr, sys.stderr]); t_err.start()

# write your local file to the remote stdin, in the foreground: we don't exit until done.
shutil.copyfileobj(open('file.ext4.gz', 'rb'), update_stdin)  # binary mode: it's a gzip file
update_stdin.close()

# optional, but let's be graceful: wait for the threads to exit, and collect exit status
t_out.join(); t_err.join()
result = update_stdout.channel.recv_exit_status()
print(f"Remote process exited with status {result}")

Python run multiple ssh commands in the same session

My goal is to connect to SSH with python and authenticate, which I can do with Paramiko or Fabric, but I would like to keep the session open after each execution and read the input/output. With Paramiko I can only run one command before the session is closed and I am asked to authenticate again, and the session hangs. And since Fabric uses the Paramiko library, it gives me the same issue. For example, if my directory structure is like this
-home
--myfolder1
--myfolder2
I would like to execute the below commands without having to re-authenticate, because the session closes.
(make connection)
run cmd: 'pwd'
output: /home
run cmd: 'cd myfolder2'
run cmd: 'pwd'
output: /home/myfolder2
Is this possible with any module that is out there right now? Could it be made from scratch with native python? And also is this just not possible...?
Edit: Added code. Without the new open_session it closes and I cannot run any command. After running the first command with this, I will be prompted to authenticate again, and it creates an infinite loop.
Edit 2: If it closes after each command, then there is no way this will work at all, correct?
Edit 3: If I run this on a different server and exec_command with paramiko.SSHClient, it won't ask me to re-authenticate, but if I 'cd somedir' and then 'pwd' it will output that I am back in the root directory where I started.
import sys
import time
import pprint
import paramiko

class connect:
    newconnection = ''
    def __init__(self, username, password):
        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        try:
            ssh.connect('someserver', username=username, password=password, port=22, timeout=5)
        except:
            print "Could not connect"
            sys.exit()
        self.newconnection = ssh
    def con(self):
        return self.newconnection

# This will create the connection
sshconnection = connect('someuser', 'somepassword').con()

while True:
    cmd = raw_input("Command to run: ")
    if cmd == "":
        break
    try:
        transport = sshconnection.get_transport()
        transport.set_keepalive(999999)
        chan = transport.open_session()
        chan.settimeout(3)
        chan.setblocking(0)
    except:
        print "Failed to open a channel"
        chan.get_exception()
        sys.exit()
    print "running '%s'" % cmd
    stdout_data = []
    stderr_data = []
    pprint.pprint(chan)
    nbytes = 4096
    chan.settimeout(5)
    chan.get_pty()
    chan.exec_command(cmd)
    while True:
        print "Inside loop ", chan.exit_status_ready()
        time.sleep(1.2)
        if chan.recv_ready():
            print "First if"
            stdout_data.append(chan.recv(nbytes))
        if chan.recv_stderr_ready():
            print "Recv Ready"
            stderr_data.append(chan.recv_stderr(nbytes))
        if chan.exit_status_ready():
            print "Breaking"
            break
    print 'exit status: ', chan.recv_exit_status()
    print ''.join(stdout_data)
This is possible by using the normal modules when you can concatenate the commands into one. Try
pwd ; cd myfolder2 ; pwd
as the command. This should work, but it quickly becomes tedious when you have more complex commands which need arguments, and horrible when the arguments contain spaces. The next step then is to copy a script with all the commands to the remote side and tell ssh to execute said script.
Another problem with this approach is that SSH doesn't return until all commands have executed.
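A minimal sketch of the concatenation approach, assuming an already-connected paramiko.SSHClient named ssh:

# one channel, one compound command; the shell keeps state between the parts
stdin, stdout, stderr = ssh.exec_command("pwd ; cd myfolder2 ; pwd")
print stdout.read()   # /home, then /home/myfolder2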
Alternatively, you could build a "command server", i.e. a simple TCP server that listens for incoming connections and executes commands sent to it. It's pretty simple to write but also pretty insecure. Again, the solution is to turn the server into a (Python) script which reads commands from stdin and start that script remotely via SSH and then send commands.
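And a minimal sketch of the stdin-driven script described above (start it remotely via ssh, then write commands to its stdin; the cd special-casing is an assumption, to keep directory changes sticky between commands):

import os, subprocess, sys

for line in sys.stdin:           # commands arrive one per line over the SSH channel
    cmd = line.strip()
    if cmd.startswith("cd "):
        os.chdir(cmd[3:])        # persist directory changes for later commands
    elif cmd:
        subprocess.call(cmd, shell=True)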

Remote executing of program via xterm run using paramiko python ssh library

Flow of the program is:
1. Connect to an OpenSSH server on a Linux machine using the Paramiko library
2. Open an X11 session
3. Run the xterm executable
4. Run some other program (e.g. Firefox) by typing the executable name in the terminal and running it
I would be grateful if someone could explain how to make an executable run inside the terminal opened by the following code, and provide sample source code (source):
import select
import sys
import paramiko
import Xlib.support.connect as xlib_connect
import os
import socket
import subprocess

# run xming
XmingProc = subprocess.Popen("C:/Program Files (x86)/Xming/Xming.exe :0 -clipboard -multiwindow")

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(SSHServerIP, SSHServerPort, username=user, password=pwd)
transport = ssh_client.get_transport()

channelOppositeEdges = {}
local_x11_display = xlib_connect.get_display(os.environ['DISPLAY'])
inputSockets = []

def x11_handler(channel, (src_addr, src_port)):
    local_x11_socket = xlib_connect.get_socket(*local_x11_display[:3])
    inputSockets.append(local_x11_socket)
    inputSockets.append(channel)
    channelOppositeEdges[local_x11_socket.fileno()] = channel
    channelOppositeEdges[channel.fileno()] = local_x11_socket
    transport._queue_incoming_channel(channel)

session = transport.open_session()
inputSockets.append(session)
session.request_x11(handler=x11_handler)
session.exec_command('xterm')
transport.accept()

while not session.exit_status_ready():
    readable, writable, exceptional = select.select(inputSockets, [], [])
    if len(transport.server_accepts) > 0:
        transport.accept()
    for sock in readable:
        if sock is session:
            while session.recv_ready():
                sys.stdout.write(session.recv(4096))
            while session.recv_stderr_ready():
                sys.stderr.write(session.recv_stderr(4096))
        else:
            try:
                data = sock.recv(4096)
                counterPartSocket = channelOppositeEdges[sock.fileno()]
                counterPartSocket.sendall(data)
            except socket.error:
                inputSockets.remove(sock)
                inputSockets.remove(counterPartSocket)
                del channelOppositeEdges[sock.fileno()]
                del channelOppositeEdges[counterPartSocket.fileno()]
                sock.close()
                counterPartSocket.close()

print 'Exit status:', session.recv_exit_status()
while session.recv_ready():
    sys.stdout.write(session.recv(4096))
while session.recv_stderr_ready():
    sys.stdout.write(session.recv_stderr(4096))
session.close()
XmingProc.terminate()
XmingProc.wait()
I was thinking about running the program in child thread, while the thread running the xterm is waiting for the child to terminate.
Well, this is a bit of a hack, but hey.
What you can do on the remote end is the following: inside the xterm, you run netcat, listen for any data coming in on some port, and pipe whatever you get into bash. It's not quite the same as typing it into xterm directly, but it's almost as good as typing it into bash directly, so I hope it gets you a bit closer to your goal. If you really want to interact with xterm directly, you might want to read this.
For example:
terminal 1:
% nc -l 3333 | bash
terminal 2 (type echo hi here):
% nc localhost 3333
echo hi
Now you should see hi pop out of the first terminal. Now try it with xterm&. It worked for me.
Here's how you can automate this in Python. You may want to add some code that enables the server to tell the client when it's ready, rather than using the silly time.sleeps.
import select
import sys
import paramiko
import Xlib.support.connect as xlib_connect
import os
import socket
import subprocess
# for connecting to netcat running remotely
from multiprocessing import Process
import time
# data
import getpass

SSHServerPort = 22
SSHServerIP = "localhost"
# get username/password interactively, or use some other method..
user = getpass.getuser()
pwd = getpass.getpass("enter pw for '" + user + "': ")
NETCAT_PORT = 3333
FIREFOX_CMD = "/path/to/firefox &"
#FIREFOX_CMD = "xclock&"  # or this :)

def run_stuff_in_xterm():
    time.sleep(5)
    s = socket.socket(socket.AF_INET6 if ":" in SSHServerIP else socket.AF_INET, socket.SOCK_STREAM)
    s.connect((SSHServerIP, NETCAT_PORT))
    s.send("echo \"Hello there! Are you watching?\"\n")
    s.send(FIREFOX_CMD + "\n")
    time.sleep(30)
    s.send("echo bye bye\n")
    time.sleep(2)
    s.close()

# run xming
XmingProc = subprocess.Popen("C:/Program Files (x86)/Xming/Xming.exe :0 -clipboard -multiwindow")

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(SSHServerIP, SSHServerPort, username=user, password=pwd)
transport = ssh_client.get_transport()

channelOppositeEdges = {}
local_x11_display = xlib_connect.get_display(os.environ['DISPLAY'])
inputSockets = []

def x11_handler(channel, (src_addr, src_port)):
    local_x11_socket = xlib_connect.get_socket(*local_x11_display[:3])
    inputSockets.append(local_x11_socket)
    inputSockets.append(channel)
    channelOppositeEdges[local_x11_socket.fileno()] = channel
    channelOppositeEdges[channel.fileno()] = local_x11_socket
    transport._queue_incoming_channel(channel)

session = transport.open_session()
inputSockets.append(session)
session.request_x11(handler=x11_handler)
session.exec_command("xterm -e \"nc -l 0.0.0.0 %d | /bin/bash\"" % NETCAT_PORT)
p = Process(target=run_stuff_in_xterm)
transport.accept()
p.start()

while not session.exit_status_ready():
    readable, writable, exceptional = select.select(inputSockets, [], [])
    if len(transport.server_accepts) > 0:
        transport.accept()
    for sock in readable:
        if sock is session:
            while session.recv_ready():
                sys.stdout.write(session.recv(4096))
            while session.recv_stderr_ready():
                sys.stderr.write(session.recv_stderr(4096))
        else:
            try:
                data = sock.recv(4096)
                counterPartSocket = channelOppositeEdges[sock.fileno()]
                counterPartSocket.sendall(data)
            except socket.error:
                inputSockets.remove(sock)
                inputSockets.remove(counterPartSocket)
                del channelOppositeEdges[sock.fileno()]
                del channelOppositeEdges[counterPartSocket.fileno()]
                sock.close()
                counterPartSocket.close()

p.join()
print 'Exit status:', session.recv_exit_status()
while session.recv_ready():
    sys.stdout.write(session.recv(4096))
while session.recv_stderr_ready():
    sys.stdout.write(session.recv_stderr(4096))
session.close()
XmingProc.terminate()
XmingProc.wait()
I tested this on a Mac, so I commented out the XmingProc bits and used /Applications/Firefox.app/Contents/MacOS/firefox as FIREFOX_CMD (and xclock).
The above isn't exactly a secure setup, as anyone connecting to the port at the right time could run arbitrary code on your remote server, but it sounds like you're planning to use this for testing purposes anyway. If you want to improve the security, you could make netcat bind to 127.0.0.1 rather than 0.0.0.0, set up an ssh tunnel (run ssh -L3333:localhost:3333 username@remote-host.com to tunnel all traffic received locally on port 3333 to remote-host.com:3333), and let Python connect to ("localhost", 3333).
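A minimal sketch of those two changes, with everything else as above:

# remote side: bind netcat to loopback only
session.exec_command("xterm -e \"nc -l 127.0.0.1 %d | /bin/bash\"" % NETCAT_PORT)

# local side: with the ssh -L tunnel running, connect through it
s.connect(("localhost", NETCAT_PORT))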
Now you can combine this with selenium for browser automation:
Follow the instructions from this page, i.e. download the selenium standalone server jar file, put it into /path/to/some/place (on the server), and pip install -U selenium (again, on the server).
Next, put the following code into selenium-example.py in /path/to/some/place:
#!/usr/bin/env python
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys
import time

browser = webdriver.Firefox()  # Get local session of firefox
browser.get("http://www.yahoo.com")  # Load page
assert "Yahoo" in browser.title
elem = browser.find_element_by_name("p")  # Find the query box
elem.send_keys("seleniumhq" + Keys.RETURN)
time.sleep(0.2)  # Let the page load, will be added to the API
try:
    browser.find_element_by_xpath("//a[contains(@href,'http://docs.seleniumhq.org')]")
except NoSuchElementException:
    assert 0, "can't find seleniumhq"
browser.close()
and change the firefox command:
FIREFOX_CMD="cd /path/to/some/place && python selenium-example.py"
And watch firefox do a Yahoo search. You might also want to increase the time.sleep.
If you want to run more programs, you can do things like this before or after running firefox:
# start up xclock, wait for some time to pass, kill it.
s.send("xclock&\n")
time.sleep(1)
s.send("XCLOCK_PID=$!\n") # stash away the process id (into a bash variable)
time.sleep(30)
s.send("echo \"killing $XCLOCK_PID\"\n")
s.send("kill $XCLOCK_PID\n\n")
time.sleep(5)
If you want to perform general X11 application control, I think you might need to write similar "driver applications", albeit using different libraries. You might want to search for "x11 send {mouse|keyboard} events" to find more general approaches. That brings up these questions, but I'm sure there's lots more.
If the remote end isn't responding instantaneously, you might want to sniff your network traffic in Wireshark, and check whether or not TCP is batching up the data, rather than sending it line by line (the \n seems to help here, but I guess there's no guarantee). If this is the case, you might be out of luck, but nothing is impossible. I hope you don't need to go that far though ;-)
One more note: if you need to communicate with CLI programs' STDIN/STDOUT, you might want to look at expect scripting (e.g. using pexpect, or for simple cases you might be able to use subprocess.Popen.communicate (http://docs.python.org/2/library/subprocess.html#subprocess.Popen.communicate)).
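For example, a minimal pexpect sketch of that idea (the host, password, and prompt strings are assumptions):

import pexpect

# drive an interactive program by waiting for its prompts
child = pexpect.spawn("ssh user@remote-host.com")
child.expect("password:")      # assumed prompt text
child.sendline("secret")
child.expect(r"\$ ")           # assumed shell prompt
child.sendline("pwd")
child.expect(r"\$ ")
print child.before             # everything printed before the prompt reappeared
child.sendline("exit")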
