Python appending file remotely - python

It seems easy enough in Python to append data to an existing local file, but not so easy to do it remotely (at least not that I've found). Is there some straightforward method for accomplishing this?
I tried using:
import subprocess
cmd = ['ssh', 'user@example.com',
       'cat - >> /path/to/file/append.txt']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE)
inmem_data = 'foobar\n'
for chunk_ix in range(0, len(inmem_data), 1024):
    chunk = inmem_data[chunk_ix:chunk_ix + 1024]
    p.stdin.write(chunk)
But maybe that's not the way to do it; so I tried posting a query:
import urllib
import urllib2
query_args = { 'q':'query string', 'foo':'bar' }
request = urllib2.Request('http://example.com:8080/')
print 'Request method before data:', request.get_method()
request.add_data(urllib.urlencode(query_args))
print 'Request method after data :', request.get_method()
request.add_header('User-agent', 'PyMOTW (http://example.com/)')
print
print 'OUTGOING DATA:'
print request.get_data()
print
print 'SERVER RESPONSE:'
print urllib2.urlopen(request).read()
But I get connection refused, so I would obviously need some type of form handler, which unfortunately I have no knowledge about. Is there a recommended way to accomplish this? Thanks.

If I understand correctly, you are trying to append a remote file to a local file...
I'd recommend using Fabric... http://www.fabfile.org/
I've tried this with text files and it works great.
Remember to install fabric before running the script:
pip install fabric
Append a remote file to a local file (I think it's self explanatory):
from fabric.api import (cd, env)
from fabric.operations import get
env.host_string = "127.0.0.1:2222"
env.user = "jfroco"
env.password = "********"
remote_path = "/home/jfroco/development/fabric1"
remote_file = "test.txt"
local_file = "local.txt"
lf = open(local_file, "a")
with cd(remote_path):
    get(remote_file, lf)
lf.close()
Run it as any Python file (it is not necessary to use the "fab" application)
Hope this helps
EDIT: New script that writes a variable at the end of a remote file:
Again, it is super simple using Fabric
from fabric.api import (cd, env, run)
from time import time
env.host_string = "127.0.0.1:2222"
env.user = "jfroco"
env.password = "*********"
remote_path = "/home/jfroco/development/fabric1"
remote_file = "test.txt"
variable = "My time is %s" % time()
with cd(remote_path):
    run("echo '%s' >> %s" % (variable, remote_file))
In the example I use time.time(), but it could be anything.
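One caveat with the echo '%s' >> %s pattern: it breaks if the variable contains single quotes or other shell metacharacters. A minimal sketch of safer command construction with shlex.quote (the helper name build_append_cmd is mine, for illustration):

```python
import shlex

def build_append_cmd(text, remote_file):
    # Quote both the payload and the path so spaces and quotes
    # survive the remote shell intact
    return "echo %s >> %s" % (shlex.quote(text), shlex.quote(remote_file))

print(build_append_cmd("My time is 12:34", "test.txt"))
print(build_append_cmd("hello world", "f.txt"))  # payload gets quoted
```

The resulting string can be passed to run() exactly like the literal command above.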

At the time of posting this, the first script above (posted by @Juan Fco. Roco) didn't work for me. What worked for me instead is as follows:
from fabric import Connection
my_host = '127.0.0.1'
my_username = "jfroco"
my_password = '*********'
remote_file_path = "/home/jfroco/development/fabric1/test.txt"
local_file_path = "local.txt"
ssh_conn = Connection(host=my_host,
                      user=my_username,
                      connect_kwargs={"password": my_password}
                      )
with ssh_conn as my_ssh_conn:
    # binary mode takes no encoding argument
    local_log_file_obj = open(local_file_path, 'ab')
    my_ssh_conn.get(remote_file_path, local_log_file_obj)
    local_log_file_obj.close()
The main difference is 'ab' (append in binary mode) instead of 'a'.
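As a local illustration of why 'ab' behaves the way it does: append-binary mode takes bytes, accepts no encoding argument, and every open keeps adding to the end of the file. A small self-contained sketch (the scratch path is mine, not the answer's local.txt):

```python
import os
import tempfile

# Create a scratch file and append bytes twice in 'ab' mode
path = os.path.join(tempfile.mkdtemp(), "local.txt")
for chunk in (b"first line\n", b"second line\n"):
    with open(path, "ab") as f:   # 'ab': append, binary -- no encoding argument
        f.write(chunk)

with open(path, "rb") as f:
    print(f.read())
```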

Related

Get local DNS settings in Python

Is there any elegant and cross platform (Python) way to get the local DNS settings?
It could probably work with a complex combination of modules such as platform and subprocess, but maybe there is already a good module, such as netifaces which can retrieve it in low-level and save some "reinventing the wheel" effort.
Less ideally, one could probably query something like dig, but I find it "noisy", because it would run an extra request instead of just retrieving something which exists already locally.
Any ideas?
Using subprocess you could do something like this on macOS or a Linux system:
import subprocess
process = subprocess.Popen(['cat', '/etc/resolv.conf'],
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
print(stdout, stderr)
or do something like this
import subprocess
with open('dns.txt', 'w') as f:
    process = subprocess.Popen(['cat', '/etc/resolv.conf'], stdout=f)
The first output will go to stdout and the second to a file
Maybe this one will solve your problem
import subprocess
def get_local_dns(cmd_):
    with open('dns1.txt', 'w+') as f:
        with open('dns_log1.txt', 'w+') as flog:
            try:
                process = subprocess.Popen(cmd_, stdout=f, stderr=flog)
            except FileNotFoundError as e:
                flog.write(f"Error while executing this command {str(e)}")

linux_cmd = ['cat', '/etc/resolv.conf']
windows_cmd = ['windows_command', 'parameters']
commands = [linux_cmd, windows_cmd]

if __name__ == "__main__":
    for cmd in commands:
        get_local_dns(cmd)
Thanks @MasterOfTheHouse.
I ended up writing my own function. It's not so elegant, but it does the job for now. There's plenty of room for improvement, but well...
import os
import subprocess

def get_dns_settings() -> dict:
    # Initialize the output variables
    dns_ns, dns_search = [], ''
    # For Unix-based OSs
    if os.path.isfile('/etc/resolv.conf'):
        for line in open('/etc/resolv.conf', 'r'):
            if line.strip().startswith('nameserver'):
                nameserver = line.split()[1].strip()
                dns_ns.append(nameserver)
            elif line.strip().startswith('search'):
                search = line.split()[1].strip()
                dns_search = search
    # If it is not a Unix-based OS, try "the Windows way"
    elif os.name == 'nt':
        cmd = 'ipconfig /all'
        raw_ipconfig = subprocess.check_output(cmd)
        # Convert the bytes into a string
        ipconfig_str = raw_ipconfig.decode('cp850')
        # Convert the string into a list of lines
        ipconfig_lines = ipconfig_str.split('\n')
        for n in range(len(ipconfig_lines)):
            line = ipconfig_lines[n]
            # Parse nameserver in current line and the next ones
            if line.strip().startswith('DNS-Server'):
                nameserver = ':'.join(line.split(':')[1:]).strip()
                dns_ns.append(nameserver)
                next_line = ipconfig_lines[n + 1]
                # If there's a lot of blank space at the beginning, assume we
                # have another nameserver on the next line
                if len(next_line) - len(next_line.strip()) > 10:
                    dns_ns.append(next_line.strip())
                    next_next_line = ipconfig_lines[n + 2]
                    if len(next_next_line) - len(next_next_line.strip()) > 10:
                        dns_ns.append(next_next_line.strip())
            elif line.strip().startswith('DNS-Suffix'):
                dns_search = line.split(':')[1].strip()
    return {'nameservers': dns_ns, 'search': dns_search}

print(get_dns_settings())
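The resolv.conf branch above can also be factored into a pure function that is easy to test without touching /etc/resolv.conf; a sketch of that refactoring (parse_resolv_conf is an illustrative name of mine):

```python
def parse_resolv_conf(text):
    """Parse nameserver and search entries from resolv.conf-style text."""
    nameservers, search = [], ''
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == 'nameserver' and len(parts) > 1:
            nameservers.append(parts[1])
        elif parts[0] == 'search' and len(parts) > 1:
            search = parts[1]
    return {'nameservers': nameservers, 'search': search}

sample = "nameserver 8.8.8.8\nnameserver 1.1.1.1\nsearch example.com\n"
print(parse_resolv_conf(sample))
```

The file-reading code then only has to feed the file's contents into this function.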
By the way... how did you manage to write two answers with the same account?

nagios core external agent using python scripting

I have a bash script for performing passive checks, i.e., an external agent/application. I tried converting the bash script into Python, but when I execute the file I don't see any response on my Nagios Core interface regarding my passive check result.
import os
import datetime
CommandFile='/usr/local/nagios/var/rw/nagios.cmd'
datetime = datetime.datetime.now()
os.stat(CommandFile)
f = open(CommandFile, 'w')
f.write("/bin/echo " + str(datetime) + " PROCESS_SERVICE_CHECK_RESULT;compute-1;python dummy;0;I am dummy python")
f.close()
my bash script code is:
#!/bin/sh
# Write a command to the Nagios command file to cause
# it to process a service check result
echocmd="/bin/echo"
CommandFile="/usr/local/nagios/var/rw/nagios.cmd"
# get the current date/time in seconds since UNIX epoch
datetime=`date +%s`
# create the command line to add to the command file
cmdline="[$datetime] PROCESS_SERVICE_CHECK_RESULT;host-name;dummy bash;0;I am dummy bash"
# append the command to the end of the command file
`$echocmd $cmdline >> $CommandFile`
Changed my code, now its working perfectly fine. I can see the response in the Nagios interface.
import time
import sys
HOSTNAME = "compute-1"
service = "python dummy"
return_code = "0"
text = "python dummy is working .....I am python dummy"
timestamp = int(time.time())
nagios_cmd = open("/usr/local/nagios/var/rw/nagios.cmd", "w")
nagios_cmd.write("[{timestamp}] PROCESS_SERVICE_CHECK_RESULT;{hostname};{service};{return_code};{text}\n".format(
    timestamp=timestamp,
    hostname=HOSTNAME,
    service=service,
    return_code=return_code,
    text=text))
nagios_cmd.close()
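The command line the script writes can also be built by a small helper, which makes the field order easy to verify on its own; a sketch with the same format as the working script (the helper name is mine):

```python
import time

def passive_check_line(hostname, service, return_code, text, timestamp=None):
    # Nagios external commands have the shape "[epoch] COMMAND;arg;arg;..."
    if timestamp is None:
        timestamp = int(time.time())
    return "[{0}] PROCESS_SERVICE_CHECK_RESULT;{1};{2};{3};{4}\n".format(
        timestamp, hostname, service, return_code, text)

print(passive_check_line("compute-1", "python dummy", 0,
                         "I am dummy python", 1234567890))
```

Keeping the formatting in one place also makes it obvious that the original Python attempt was missing the [timestamp] brackets and included a stray "/bin/echo".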

Python Fabric won't pass in variable

I had a script that was working. I made one small change and now it stopped working. The top version works, while the bottom one fails.
def makelocalconfig(file="TEXT"):
    host = env.host_string
    filename = file
    conf = open('/home/myuser/verify_yslog_conf/%s/%s' % (host, filename), 'r')
    comment = open('/home/myuser/verify_yslog_conf/%s/localconfig.txt' % host, 'w')
    for line in conf:
        comment.write(line)
    comment.close()
    conf.close()

def makelocalconfig(file="TEXT"):
    host = env.host_string
    filename = file
    path = host + "/" + filename
    pwd = local("pwd")
    conf = open('%s/%s' % (pwd, path), 'r')
    comment = open('%s/%s/localconfig.txt' % (pwd, host), 'w')
    for line in conf:
        comment.write(line)
    comment.close()
    conf.close()
For troubleshooting purposes I added a print pwd and print path line to make sure the variables were getting filled correctly. pwd comes up empty. Why isn't this variable being set correctly? I use this same format of
var = sudo("cmd")
all the time. Is local different from sudo and run?
In short, you may need to add capture=True:
pwd = local("pwd", capture=True)
local runs a command locally:
a convenience wrapper around the use of the builtin Python subprocess
module with shell=True activated.
run runs a command on a remote server and sudo runs a remote command as super-user.
There is also a note in the documentation:
local is not currently capable of simultaneously printing and capturing output, as run/sudo do. The capture kwarg allows you to switch between printing and capturing as necessary, and defaults to False.
When capture=False, the local subprocess’ stdout and stderr streams are hooked up directly to your terminal, though you may use the global output controls output.stdout and output.stderr to hide one or both if desired. In this mode, the return value’s stdout/stderr values are always empty.
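For reference, the capture behaviour can be sketched with the plain subprocess module that local wraps; this uses subprocess.run, which is the modern idiom rather than anything Fabric-specific:

```python
import subprocess

# Roughly what local(cmd, capture=True) does: run the command
# and capture its stdout instead of letting it hit the terminal
result = subprocess.run(['echo', 'hello'], capture_output=True, text=True)
captured = result.stdout.strip()
print(captured)
```

Without capture_output=True, result.stdout is None, which mirrors the empty return value the question describes for capture=False.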

Remote Directory Listing Python WSGI

I'm currently an intern at an IT service and I've been asked to build a web-based app using Python that will run in a Linux environment. This web app has to be WSGI-compliant and I cannot use any framework.
My current issue is that I want a variable set to the list of files in the directory in question, so that I can then list those files in a table with one row per file.
I am aware of os.listdir() but can't find a way to use it on a remote server (which, judging from my Google searches, shouldn't be the case...).
I tried an os.system(ssh root@someip:/path/to/dir/) but, as the Python docs state, I can't get the output I want, since it returns an exit status (an integer)...
Below is a piece of my script.
# ip is equal to the ip of the server I want to list.
ip = 192..............
directory = "/var/lib/libvirt/images/"
command = "ssh root@" + ip + " ls " + directory
dirs = os.system(command)
files = ""
table_open = "<table>"
table_close = "</table>"
table_title_open = "<th>Server: "
table_title_close = "</th>"
tr_open = "<tr>"
tr_close = "</tr>"
td_open = "<td>"
td_close = "</td>"
input_open = "<input type='checkbox' name='choice' value='"
input_close = "'>"
# If I don't put dirs in brackets it raises an error (dirs not being iterable)
for file in [dirs]:
    files = files + tr_open+td_open+file+td_close+td_open+input_open+file+input_close+td_close+tr_close
table = table_open+table_title_open+str(num_server)+table_title_close+files+table_close
I've tried this with a local directory (with os.listdir) and it works perfectly. I am having troubles only with remote directory listing...
I do hope that my question is crystal clear, if not I'll do my best to be more accurate.
Thanks in advance,
-Karink.
You can use the subprocess module, here is an example:
import subprocess
ls = subprocess.Popen(['ssh', 'user@xx.xx.xx.xx', 'ls'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = ls.communicate()
print out
print err
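Note that out is one bytes blob, not a list of names. A small sketch of turning it into the iterable the question's table loop needs (listing_to_files is an illustrative name, shown here on sample bytes rather than real ssh output):

```python
def listing_to_files(out):
    """Turn the raw stdout of a remote `ls` into a clean list of names."""
    return [name for name in out.decode().splitlines() if name]

sample = b"disk1.img\ndisk2.img\n\n"
print(listing_to_files(sample))
```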
You may also use pysftp
First install it using pip install pysftp; then the code below can list the files on a remote Linux machine (from Windows as well):
import pysftp
cnopts = pysftp.CnOpts()
cnopts.hostkeys = None
with pysftp.Connection('ipLinuxmachine', username='username', password='passwd', cnopts=cnopts) as sftp:
    out = sftp.execute('cd path of directory ; ls')
    print out
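Once you have a real list of names, the table-building concatenation from the question can be wrapped in a function; a sketch reproducing the question's HTML shape (files_table is my name for it):

```python
def files_table(files, server_name):
    """Build the question's HTML table: one row and checkbox per file."""
    rows = []
    for name in files:
        rows.append(
            "<tr><td>%s</td>"
            "<td><input type='checkbox' name='choice' value='%s'></td></tr>"
            % (name, name))
    return "<table><th>Server: %s</th>%s</table>" % (server_name, "".join(rows))

print(files_table(["disk1.img", "disk2.img"], "1"))
```

Iterating over a list this way also removes the need for the [dirs] bracket workaround in the question.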

Python - fabric log file creation using definitions and env.hosts

Been trying to solve this but can't seem to make it work. I want to create a log file that looks like $HOSTNAME-timestamp. For example I have this:
def test_servers():
    env.user = getpass.getuser()
    env.hosts = ['servernumber1', 'servernumber2']

def logname():
    timestamp = time.strftime("%b_%d_%Y_%H:%M:%S")
    'env.hosts' + timestamp

def audit():
    name = logname()
    sys.stdout = open('/home/path/to/audit/directory/%s' % name, 'w')
    run('hostname -i')
    print 'Checking the uptime of: ', env.host
    if run('uptime') < '0':
        print(red("it worked for less"))
    elif run('uptime') > '0':
        print(green("it worked for greater"))
    else:
        print "WTF?!"
When I run fabric on my fabfile.py to perform "audit" it works just fine but it's not creating the log file with the appended host name at the beginning of the file. It does create a log file for each host defined in test_servers with the timestamp though. Any help would be greatly appreciated.
Looks like env.hosts cannot be defined in a function; only env.host_string can.
So I guess maybe you got some error messages like:
No hosts found. Please specify (single) host string for connection:
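There is also a separate bug in the snippet: logname() concatenates the literal string 'env.hosts' with the timestamp and then returns None, so the log file name ends up as "None". A sketch of a version that takes the host explicitly and actually returns the name (assuming audit() passes env.host_string in):

```python
import time

def logname(host):
    # Build "<host>-<timestamp>" and return it --
    # the original built a string but never returned it
    timestamp = time.strftime("%b_%d_%Y_%H:%M:%S")
    return "%s-%s" % (host, timestamp)

print(logname("servernumber1"))
```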
