Is there a quick way to know the status of another computer? - python

I need to know the status of ten computers. Using "ping", it takes about ten seconds to get this info. I want a quicker way to get it on Windows 7 64-bit.
Code:
from platform import system as system_name  # Returns the system/OS name
from os import system as system_call  # Execute a shell command

def ping(host):
    # Ping parameters as function of OS
    parameters = "-n 1" if system_name().lower() == "windows" else "-c 1"
    # Pinging
    return system_call("ping " + parameters + " " + host) == 0
Thanks!

Try with subprocess:
import subprocess
from platform import system as system_name  # needed for the OS check

def ping(host):
    # Ping parameters as function of OS
    parameters = "-n" if system_name().lower() == "windows" else "-c"
    # Pinging
    return subprocess.Popen(["ping", host, parameters, '1'], stdout=subprocess.PIPE).stdout.read()
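Since the slow part is pinging the ten machines one after another, a quicker approach is to run the pings concurrently. Here is a minimal sketch, assuming Python 3 and a hypothetical list of addresses; with a thread pool, ten checks take roughly as long as the slowest single ping:
import subprocess
from concurrent.futures import ThreadPoolExecutor
from platform import system as system_name

def ping(host):
    count_flag = "-n" if system_name().lower() == "windows" else "-c"
    # DEVNULL discards the output; only the exit code matters here
    return subprocess.call(["ping", count_flag, "1", host],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL) == 0

hosts = ["192.168.0.%d" % i for i in range(1, 11)]  # hypothetical addresses
with ThreadPoolExecutor(max_workers=10) as pool:
    for host, alive in zip(hosts, pool.map(ping, hosts)):
        print(host, "up" if alive else "down")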

Related

nagios core external agent using python scripting

I have a bash script that performs passive checks, i.e. acts as an external agent/application. I tried converting the bash script into Python, but when I execute the file I don't see any response on my Nagios Core interface for my passive check result.
import os
import datetime
CommandFile='/usr/local/nagios/var/rw/nagios.cmd'
datetime = datetime.datetime.now()
os.stat(CommandFile)
f = open(CommandFile, 'w')
f.write("/bin/echo " + str(datetime) + " PROCESS_SERVICE_CHECK_RESULT;compute-1;python dummy;0;I am dummy python")
f.close()
My bash script is:
#!/bin/sh
# Write a command to the Nagios command file to cause
# it to process a service check result
echocmd="/bin/echo"
CommandFile="/usr/local/nagios/var/rw/nagios.cmd"
# get the current date/time in seconds since UNIX epoch
datetime=`date +%s`
# create the command line to add to the command file
cmdline="[$datetime] PROCESS_SERVICE_CHECK_RESULT;host-name;dummy bash;0;I am dummy bash"
# append the command to the end of the command file
`$echocmd $cmdline >> $CommandFile`
I changed my code and now it's working fine; I can see the response in the Nagios interface.
import time

HOSTNAME = "compute-1"
service = "python dummy"
return_code = "0"
text = "python dummy is working .....I am python dummy"
timestamp = int(time.time())

nagios_cmd = open("/usr/local/nagios/var/rw/nagios.cmd", "w")
nagios_cmd.write("[{timestamp}] PROCESS_SERVICE_CHECK_RESULT;{hostname};{service};{return_code};{text}\n".format(
    timestamp=timestamp,
    hostname=HOSTNAME,
    service=service,
    return_code=return_code,
    text=text))
nagios_cmd.close()
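The two details that made the difference: the external command must start with a "[timestamp]" prefix in epoch seconds, and the Python attempt was writing the literal string "/bin/echo" into the pipe instead of just the command line. A minimal sketch wrapping the working write in a reusable helper (paths and field values are the ones from the question):
import time

def submit_passive_check(hostname, service, return_code, text,
                         command_file="/usr/local/nagios/var/rw/nagios.cmd"):
    # Nagios expects: [epoch] PROCESS_SERVICE_CHECK_RESULT;host;service;rc;output
    line = "[{0}] PROCESS_SERVICE_CHECK_RESULT;{1};{2};{3};{4}\n".format(
        int(time.time()), hostname, service, return_code, text)
    with open(command_file, "w") as nagios_cmd:
        nagios_cmd.write(line)

submit_passive_check("compute-1", "python dummy", 0, "I am dummy python")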

pxssh does not work between compute nodes in a slurm cluster

I'm using the following script to connect two compute nodes in a slurm cluster.
from getpass import getuser
from socket import gethostname
from pexpect import pxssh
import sys

python = sys.executable
worker_command = "%s -m worker" % python + " %i " + server_socket
pid = 0
children = []
for node, ntasks in node_list.items():
    if node == gethostname():
        continue
    pid_range = range(pid, pid + ntasks)
    pid += ntasks
    ssh = pxssh.pxssh()
    ssh.login(node, getuser())
    for worker in pid_range:
        ssh.sendline(worker_command % worker + '&')
    children.append(ssh)
node_list is a dictionary {'cn000': 28, 'cn001': 28}. worker is a Python file placed in the working directory.
I expected ssh.sendline to behave the same as pexpect.spawn. However, nothing happened after I ran the script.
Although an ssh session was built by ssh.login(node, getuser()), the line ssh.sendline(worker_command % worker) seems to have no effect, because the script that worker_command should run is never started.
How can I fix this? Or should I try something else?
How can I create one socket on one compute node and connect it with a socket on another compute node?
A '%s' is missing from the content of worker_command: it contains something like "/usr/bin/python3 -m worker", so worker_command % worker should result in an error.
If not (which is possible, because this source looks like a short part of the original program), then add ">>workerprocess.log 2>&1" before the '&', then run your program and take a look at workerprocess.log on the server. If your $HOME is writable on the server, you should find the error message(s) in it.
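A minimal sketch of that logging suggestion, assuming the rest of the script stays unchanged (the log file name is just the example from above):
# Redirect each worker's stdout/stderr to a log file on the remote node,
# so errors from the remotely started command become visible.
ssh.sendline(worker_command % worker + ' >>workerprocess.log 2>&1 &')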

how to use netstat -nb in python

I want to use netstat -nb in Python, but with every piece of code I write I get the same message: "The requested operation requires elevation."
The last code I tried is:
import os
output_command = os.popen("netstat -nb").readlines()
and I also tried:
import subprocess
program_list = subprocess.run(["netstat", "-nb"], stdout=subprocess.PIPE).stdout.decode("utf-8")
program_list = program_list.split("\r\n")
Try this, it's working!
import os
a = os.popen('netstat -nb').read()
print("\n Connections ", a)
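Note that netstat -nb requires administrator rights no matter how Python invokes it, so the snippets above only work from an elevated prompt. A minimal sketch, assuming Windows, that detects (but cannot grant) elevation before calling it:
import ctypes
import subprocess
import sys

# IsUserAnAdmin reports whether the current process runs elevated
if not ctypes.windll.shell32.IsUserAnAdmin():
    sys.exit("Run this script from an elevated (Administrator) prompt.")

output = subprocess.run(["netstat", "-nb"],
                        stdout=subprocess.PIPE).stdout.decode("utf-8", errors="replace")
print(output)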

Exit if the called python script encounters an error

I have a central python script that calls various other python scripts and looks like this:
os.system("python " + script1 + args1)
os.system("python " + script2 + args2)
os.system("python " + script3 + args3)
Now, I want to exit from my central script if any of the sub-scripts encounter an error.
What happens with the current code: say script1 encounters an error. The console displays that error, and then the central script moves on to calling script2, and so on.
I want to display the encountered error and immediately exit my central code.
What is the best way to do this?
Overall this is a terrible way to execute a series of commands from within Python. However here's a minimal way to handle it:
#!python
import os, sys

for script, args in some_tuple_of_commands:
    exit_code = os.system("python " + script + args)
    if exit_code > 0:
        print("Error %d running 'python %s %s'" % (
            exit_code, script, args), file=sys.stderr)
        sys.exit(exit_code)
But, honestly this is all horrible. It's almost always a bad idea to concatenate strings and pass them to your shell for execution from within any programming language.
Look at the subprocess module for much more sane handling of subprocesses in Python.
Also consider trying the sh or the pexpect third party modules depending on what you're trying to do with input or output.
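A minimal sketch of the subprocess suggestion (the script names are placeholders for script1..script3 above): passing check=True makes subprocess.run raise on a non-zero exit, so the loop stops at the first failing script and the central script exits with its code:
import subprocess
import sys

commands = [["python", "script1.py", "arg1"],
            ["python", "script2.py", "arg2"],
            ["python", "script3.py", "arg3"]]

for cmd in commands:
    try:
        subprocess.run(cmd, check=True)  # raises CalledProcessError on failure
    except subprocess.CalledProcessError as e:
        print("Error %d running %r" % (e.returncode, cmd), file=sys.stderr)
        sys.exit(e.returncode)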
You can try subprocess:
import subprocess, sys
try:
    output = subprocess.check_output("python test.py", shell=True)
    print(output)
except subprocess.CalledProcessError as e:  # raised on a non-zero exit code
    print(e)
    sys.exit(e.returncode)
print("hello world")
I don't know if it's ideal for you, but enclosing these commands in a function seems a good idea to me.
I am using the fact that when a process exits with an error, os.system(process) returns a non-zero value (for example 256 on Unix for an exit code of 1), and 0 otherwise.
def runscripts():
    if os.system("python " + script1 + args1): return -1  # Returns -1 if script1 fails and exits.
    if os.system("python " + script2 + args2): return -2  # Returns -2 and exits
    if os.system("python " + script3 + args3): return -3  # Pretty obvious
    return 0

runscripts()
# or, if you want to exit the main program
if runscripts(): sys.exit(1)
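For reference, the 256 comes from the wait status that os.system returns on Unix: the child's exit code sits in the high byte (on Windows the exit code is returned directly). A small sketch, assuming Unix and Python 3.9+ for waitstatus_to_exitcode:
import os

status = os.system("exit 3")  # a wait status on Unix, not the exit code itself
print(status)  # 768, i.e. 3 << 8
print(os.waitstatus_to_exitcode(status))  # 3 (Python 3.9+)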
Invoking the operating system like that is a security breach waiting to happen. One should use the subprocess module, because it is more powerful and does not invoke a shell (unless you specifically tell it to). In general, avoid invoking shell whenever possible (see this post).
You can do it like this:
import subprocess
import sys

# create a list of commands
# each command to subprocess.run must be a list of arguments, e.g.
# ["python", "echo.py", "hello"]
cmds = [("python " + script + " " + args).split()
        for script, args in [(script1, args1), (script2, args2), (script3, args3)]]
def captured_run(arglist):
    """Run a subprocess and return the output and returncode."""
    proc = subprocess.run(  # PIPE captures the output
        arglist, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    return proc.stdout, proc.stderr, proc.returncode

for cmd in cmds:
    stdout, stderr, rc = captured_run(cmd)
    # do whatever with stdout, stderr (note that they are bytestrings)
    if rc != 0:
        sys.exit(rc)
If you don't care about the output, just remove the subprocess.PIPE stuff and return only the returncode from the function. You may also want to add a timeout to the execution, see the subprocess docs linked above for how to do that.
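For instance, the timeout goes in the call itself; subprocess.run raises subprocess.TimeoutExpired if it elapses (the 60-second limit here is an arbitrary example):
proc = subprocess.run(  # raises TimeoutExpired after 60 seconds
    arglist, stdout=subprocess.PIPE, stderr=subprocess.PIPE, timeout=60)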

Python Flask, Handling Popen poll / wait / communicate without halting multi-threaded Python

The code below is executed on a certain URL (/new...) and assigns variables to the session cookie, which is used to build the display. This example calls a command using subprocess.Popen.
The problem is that the Popen command called below typically takes 3 minutes, and subprocess.communicate waits for the output, during which time all other Flask calls (e.g. another user connecting) are halted. I have commented out some lines related to other things I've tried without success: one using the threading module and another using subprocess.poll.
from app import app
from flask import render_template, redirect, session
from subprocess import Popen, PIPE
import threading

@app.route('/new/<number>')
def new_session(number):
    get_list(number)
    #t = threading.Thread(target=get_list, args=(number))
    #t.start()
    #t.join()
    return redirect('/')

def get_list(number):
    #1 Call JAR Get String
    command = 'java -jar fetch.jar ' + str(number)
    print "Executing " + command
    stream = Popen(command, shell=False, stdout=PIPE)
    #while stream.poll() is None:
    #    print "stream.poll = " + str(stream.poll())
    #    time.sleep(1)
    stdout, stderr = stream.communicate()
    #do some item splits and some processing, left out for brevity
    session['data'] = stdout.split("\r\n")
    return
What's the "better practice" for handling this situation correctly?
For reference, this code runs on Python 2.7.8 on win32, with Flask 0.10.1.
First, you should use a work queue like Celery, RabbitMQ or Redis (here is a helpful hint).
Then the get_list function becomes:
@celery.task
def get_list(number):
    command = 'java -jar fetch.jar {}'.format(number)
    print "Executing " + command
    stream = Popen(command, shell=False, stdout=PIPE)
    stdout, stderr = stream.communicate()
    return stdout.split('\r\n')
And in your view, you wait for the result:
@app.route('/new/<number>')
def new_session(number):
    result = get_list.delay(number)
    session['data'] = result.wait()
    return redirect('/')
Now, it doesn't block your view! :)
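For completeness, a minimal sketch of the Celery wiring this answer assumes; the Redis broker URL is a placeholder and any supported broker works. The worker process is started separately, e.g. with "celery -A app.celery worker":
from celery import Celery
from app import app

# Hypothetical setup: the broker URL is an assumption for this sketch
celery = Celery(app.import_name, broker='redis://localhost:6379/0')
celery.conf.update(app.config)
Note that result.wait() still blocks that one request, but the three-minute job now runs in a separate worker process, so other Flask requests are served in the meantime.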
