Been trying to solve this but can't seem to make it work. I want to create a log file whose name looks like $HOSTNAME-timestamp. For example, I have this:
def test_servers():
    env.user = getpass.getuser()
    env.hosts = ['servernumber1', 'servernumber2']

def logname():
    timestamp = time.strftime("%b_%d_%Y_%H:%M:%S")
    'env.hosts' + timestamp

def audit():
    name = logname()
    sys.stdout = open('/home/path/to/audit/directory/%s' % name, 'w')
    run('hostname -i')
    print 'Checking the uptime of: ', env.host
    if run('uptime') < '0':
        print(red("it worked for less"))
    elif run('uptime') > '0':
        print(green("it worked for greater"))
    else:
        print "WTF?!"
When I run Fabric on my fabfile.py to perform "audit", it works just fine, but it's not creating the log file with the host name prepended to the file name. It does create a log file for each host defined in test_servers with the timestamp, though. Any help would be greatly appreciated.
It looks like env.hosts cannot be defined inside a function; only env.host_string can.
So I guess you may have got an error message like:
No hosts found. Please specify (single) host string for connection:
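As a sketch of the fix (my adaptation, untested against your setup): build the name from env.host_string, which Fabric fills in per host while a task runs, and make sure logname() actually returns the string:

import time
from fabric.api import env

def logname():
    timestamp = time.strftime("%b_%d_%Y_%H:%M:%S")
    # env.host_string holds the host the current task is running on;
    # the original built the literal string 'env.hosts' + timestamp
    # and never returned it
    return '%s-%s' % (env.host_string, timestamp)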
I am stumped as to what this issue could be. The Nagios log does not report any errors anymore, but nothing gets written to my file.
def log_something(host_name host_address, attempt_number):
    with open('file', 'a+') as log:
        log.write('called function at %s with args %s' %s (str(datetime.datetime.now()), locals()))

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('host_name')
    parser.add_argument('host_address')
    parser.add_argument('attempt_number')
    args = parser.parse_args()
    log_something(args.host_name, args.host_address, args.attempt_number)

if __name__ == "__main__":
    main()
And in my commands.cfg
define command {
    command_name my_command
    command_line $USER1$/my_command.py $HOSTNAME$ $HOSTADDRESS$ $HOSTATTEMPT$
}
And in my host config
define host {
    ...
    event_handler my_command
}
And in the nagios log (journalctl -xe)
HOST ALERT: test-router;UP;HARD;5;PING OK - Packet loss = 0%, RTA = 0.96 ms
Jan 31 15:38:47 nagios-server.nagios[9212]: HOST EVENT HANDLER: test-router;UP;HARD;5;my_command
Nothing is written to the file and no errors are reported. When there were syntax errors in my script, the Nagios log would print the errors that were reported to stderr; one of those was a file permission issue. I fixed that by creating the file in the same folder and chmod 777 everything. Besides, if that were still an issue, it should be logged.
Anyone have any idea what's going on here?
I figured something out. It seems as if you define where the output of nagios event handlers goes in the nagios.cfg file.
# TEMP PATH
# This is path where Nagios can create temp files for service and
# host check results, etc.
temp_path=/tmp
I think this means that the output of the plugin checks gets sent to the temp folder. However, changing that path seems to have no effect, and the output of my Python script still gets written as a file in the /tmp folder. This is CentOS 7.
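One more thing worth checking (an assumption on my part, not something the Nagios docs promise): open('file', 'a+') in the handler uses a path relative to the daemon's working directory, which would explain the output landing in /tmp regardless of temp_path. An absolute path makes the destination explicit:

# Assumption: Nagios launches the handler from its own working directory,
# so a relative path like 'file' resolves somewhere unexpected (e.g. /tmp).
LOG_PATH = '/var/log/nagios/my_command.log'  # hypothetical path the nagios user can write to

with open(LOG_PATH, 'a+') as log:
    log.write('event handler fired\n')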
Check your function; it seems you are missing the "," between the function parameters.
def log_something(host_name host_address, attempt_number):
    with open('file', 'a+') as log:
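For reference, a corrected version (note the added comma, and that the string-formatting operator on the write line is % rather than the %s in your original, which is also a syntax error):

import datetime

def log_something(host_name, host_address, attempt_number):
    with open('file', 'a+') as log:
        log.write('called function at %s with args %s'
                  % (str(datetime.datetime.now()), locals()))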
I had a script that was working. I made one small change and now it stopped working. The top version works, while the bottom one fails.
def makelocalconfig(file="TEXT"):
    host = env.host_string
    filename = file
    conf = open('/home/myuser/verify_yslog_conf/%s/%s' % (host, filename), 'r')
    comment = open('/home/myuser/verify_yslog_conf/%s/localconfig.txt' % host, 'w')
    for line in conf:
        comment.write(line)
    comment.close()
    conf.close()
def makelocalconfig(file="TEXT"):
    host = env.host_string
    filename = file
    path = host + "/" + filename
    pwd = local("pwd")
    conf = open('%s/%s' % (pwd, path), 'r')
    comment = open('%s/%s/localconfig.txt' % (pwd, host), 'w')
    for line in conf:
        comment.write(line)
    comment.close()
    conf.close()
For troubleshooting purposes I added a print pwd and print path line to make sure the variables were getting filled correctly. pwd comes up empty. Why isn't this variable being set correctly? I use this same format of
var = sudo("cmd")
all the time. Is local different from sudo and run?
In short, you may need to add capture=True:
pwd = local("pwd", capture=True)
local runs a command locally:
a convenience wrapper around the use of the builtin Python subprocess
module with shell=True activated.
run runs a command on a remote server and sudo runs a remote command as super-user.
There is also a note in the documentation:
local is not currently capable of simultaneously printing and capturing output, as run/sudo do. The capture kwarg allows you to switch between printing and capturing as necessary, and defaults to False.
When capture=False, the local subprocess’ stdout and stderr streams are hooked up directly to your terminal, though you may use the global output controls output.stdout and output.stderr to hide one or both if desired. In this mode, the return value’s stdout/stderr values are always empty.
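Applied to the failing version of makelocalconfig above, the fix would look like this sketch:

from fabric.api import env, local

def makelocalconfig(file="TEXT"):
    host = env.host_string
    filename = file
    path = host + "/" + filename
    # capture=True makes local() return the command's stdout;
    # without it, pwd would always be an empty string
    pwd = local("pwd", capture=True)
    conf = open('%s/%s' % (pwd, path), 'r')
    comment = open('%s/%s/localconfig.txt' % (pwd, host), 'w')
    for line in conf:
        comment.write(line)
    comment.close()
    conf.close()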
It seems easy enough in Python to append data to an existing file (locally), although not so easy to do it remotely (at least that I've found). Is there some straightforward method for accomplishing this?
I tried using:
import subprocess

cmd = ['ssh', 'user@example.com',
       'cat - > /path/to/file/append.txt']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE)

inmem_data = 'foobar\n'
for chunk_ix in range(0, len(inmem_data), 1024):
    chunk = inmem_data[chunk_ix:chunk_ix + 1024]
    p.stdin.write(chunk)
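For what it's worth, the subprocess approach can work: the snippet above just never closes the pipe, so the remote cat waits for input forever, and > truncates rather than appends. A sketch with those two fixes (assuming key-based ssh access to the hypothetical user@example.com):

import subprocess

# '>>' appends instead of overwriting
cmd = ['ssh', 'user@example.com',
       'cat - >> /path/to/file/append.txt']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE)

inmem_data = 'foobar\n'
for chunk_ix in range(0, len(inmem_data), 1024):
    chunk = inmem_data[chunk_ix:chunk_ix + 1024]
    p.stdin.write(chunk)

p.stdin.close()  # send EOF so the remote cat can finish
p.wait()         # wait for ssh to exit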
But maybe that's not the way to do it; so I tried posting a query:
import urllib
import urllib2
query_args = { 'q':'query string', 'foo':'bar' }
request = urllib2.Request('http://example.com:8080/')
print 'Request method before data:', request.get_method()
request.add_data(urllib.urlencode(query_args))
print 'Request method after data :', request.get_method()
request.add_header('User-agent', 'PyMOTW (http://example.com/)')
print
print 'OUTGOING DATA:'
print request.get_data()
print
print 'SERVER RESPONSE:'
print urllib2.urlopen(request).read()
But I get connection refused, so I would obviously need some type of form handler, which unfortunately I know nothing about. Is there a recommended way to accomplish this? Thanks.
If I understand correctly, you are trying to append a remote file to a local file...
I'd recommend using Fabric: http://www.fabfile.org/
I've tried this with text files and it works great.
Remember to install fabric before running the script:
pip install fabric
Append a remote file to a local file (I think it's self explanatory):
from fabric.api import (cd, env)
from fabric.operations import get

env.host_string = "127.0.0.1:2222"
env.user = "jfroco"
env.password = "********"

remote_path = "/home/jfroco/development/fabric1"
remote_file = "test.txt"
local_file = "local.txt"

lf = open(local_file, "a")
with cd(remote_path):
    get(remote_file, lf)
lf.close()
Run it as any python file (it is not necessary to use "fab" application)
Hope this helps
EDIT: New script that write a variable at the end of a remote file:
Again, it is super simple using Fabric
from fabric.api import (cd, env, run)
from time import time

env.host_string = "127.0.0.1:2222"
env.user = "jfroco"
env.password = "*********"

remote_path = "/home/jfroco/development/fabric1"
remote_file = "test.txt"

variable = "My time is %s" % time()

with cd(remote_path):
    run("echo '%s' >> %s" % (variable, remote_file))
In the example I use time.time() but could be anything.
At the time of posting this, the first script above (posted by @Juan Fco. Roco) didn't work for me. What worked for me instead is as follows:
from fabric import Connection

my_host = '127.0.0.1'
my_username = "jfroco"
my_password = '*********'
remote_file_path = "/home/jfroco/development/fabric1/test.txt"
local_file_path = "local.txt"

ssh_conn = Connection(host=my_host,
                      user=my_username,
                      connect_kwargs={"password": my_password})

with ssh_conn as my_ssh_conn:
    # 'ab' is binary mode, so open() must not be given an encoding
    # argument (passing one raises ValueError)
    local_log_file_obj = open(local_file_path, 'ab')
    my_ssh_conn.get(remote_file_path, local_log_file_obj)
    local_log_file_obj.close()
The main difference is 'ab' (append in binary mode) instead of 'a'.
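For completeness, a Fabric 2 sketch of the second (append-a-variable-to-a-remote-file) script above, using the same hypothetical host and credentials:

from fabric import Connection
from time import time

ssh_conn = Connection(host='127.0.0.1',
                      user='jfroco',
                      connect_kwargs={"password": "*********"})

variable = "My time is %s" % time()
# run() executes on the remote host; '>>' appends to the remote file
ssh_conn.run("echo '%s' >> /home/jfroco/development/fabric1/test.txt" % variable)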
I have created the script below and it works fine, but the output is not friendly (see below). I want the first line to display only the hostname and IP, without the (, ', [ and ] characters. Please suggest.
('testhostname', [], ['10.10.10.10'])
cannot resolve hostname: 10.10.10.11
import socket

pfile = open('C:\\Python27\\scripts\\test.txt')
while True:
    IP = pfile.readline()
    if not IP:
        break
    try:
        host = socket.gethostbyaddr(IP.rstrip())
        print host
    except socket.herror, err:
        print "cannot resolve hostname: ", IP
pfile.close()
Rather than printing all of the host tuple that is returned by gethostbyaddr, I suggest unpacking into separate variables that you can then print as you see fit:
hostname, alias_list, ip_addr_list = socket.gethostbyaddr(IP.rstrip())
print hostname, ip_addr_list # or ip_addr_list[0] if you only want the one IP
If you want more control over the formatting, I suggest using the str.format method:
print "hostname: {}, IP(s): {}".format(hostname, ", ".join(ip_addr_list))
Also, a few other code suggestions (not directly related to your main question; a combined example follows the list):
- Use a with statement rather than manually opening and closing your file.
- Iterate on the file object directly (with for IP in pfile:), rather than using while True: and calling pfile.readline() each time through.
- Use the syntax except socket.herror as err rather than the older form with a comma (which is deprecated in Python 2 and no longer exists in Python 3).
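Putting those suggestions together, the loop might look like this (Python 2, to match your original):

import socket

with open('C:\\Python27\\scripts\\test.txt') as pfile:
    for IP in pfile:  # iterate over the file's lines directly
        ip = IP.rstrip()
        try:
            hostname, alias_list, ip_addr_list = socket.gethostbyaddr(ip)
            print "hostname: {}, IP(s): {}".format(hostname, ", ".join(ip_addr_list))
        except socket.herror as err:
            print "cannot resolve hostname: ", ip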
I am trying to use the Fabric API so that I don't have to use the very inconvenient and kludgey fabric command-line call. I set the env.hosts variable in one module/class and then call another class instance method to run a fabric command. In the called class instance I can print out the env.hosts list, yet when I try to run a command it tells me it can't find a host.
If I loop through the env.hosts array and manually set the env.host_string variable for each host in it, I can get the run command to work. What is odd is that I also set the env.user variable in the calling class, and it is picked up.
e.g. this works:
def upTest(self):
    print('env.hosts = ' + str(env.hosts))
    for host in env.hosts:
        env.host_string = host
        print('env.host_string = ' + env.host_string)
        run("uptime")
output from this:
env.hosts = ['ec2-....amazonaws.com']
env.host_string = ec2-....amazonaws.com
[ec2-....amazonaws.com] run: uptime
[ec2-....amazonaws.com] out: 18:21:15 up 2 days, 2:13, 1 user, load average: 0.00, 0.01, 0.05
[ec2-....amazonaws.com] out:
This doesn't work... but it does work if you run it from a fabfile... it makes no sense to me.
def upTest(self):
    print('env.hosts = ' + str(env.hosts))
    run("uptime")
This is the output:
No hosts found. Please specify (single) host string for connection:
I did try putting an @task decorator on the method (and removing the 'self' reference, since the decorator didn't like that), but that didn't help.
Is there any way to get this to work with env.hosts? As opposed to having to loop manually whenever I have multiple hosts to run this on?
Finally, I fixed this problem by using execute() and exec.
main.py
#!/usr/bin/env python
from demo import FabricSupport

hosts = ['localhost']

myfab = FabricSupport()
myfab.execute("df", hosts)
demo.py
#!/usr/bin/env python
from fabric.api import env, run, execute

class FabricSupport:
    def __init__(self):
        pass

    def hostname(self):
        run("hostname")

    def df(self):
        run("df -h")

    def execute(self, task, hosts):
        get_task = "task = self.%s" % task
        exec get_task
        execute(task, hosts=hosts)
python main.py
[localhost] Executing task 'hostname'
[localhost] run: hostname
[localhost] out: heydevops-workspace
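As a side note (my suggestion, not part of the original answer): getattr does the same name-to-method lookup without the exec statement:

from fabric.api import run, execute

class FabricSupport:
    def df(self):
        run("df -h")

    def execute(self, task, hosts):
        # getattr fetches the bound method by name; no exec needed
        execute(getattr(self, task), hosts=hosts)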
I've found that it's best not to set env.hosts in code, but instead to define roles based on your config file and use the fab tool to specify a role. It worked for me.
my_roles.json
{
    "web": [ "user@web1.example.com", "user@web2.example.com" ],
    "db": [ "user@db1.example.com", "user@db2.example.com" ]
}
fabfile.py
from fabric.api import env, run, task
import json

def load_roles():
    with open('my_roles.json') as f:
        env.roledefs = json.load(f)

load_roles()

@task
def my_task():
    run("hostname")
CLI
fab -R web my_task
The output from running my_task for each of web1 and web2 is shown here.