Is there any elegant and cross platform (Python) way to get the local DNS settings?
It could probably be done with a complex combination of modules such as platform and subprocess, but maybe there is already a good module, such as netifaces, which can retrieve it at a low level and save some "reinventing the wheel" effort.
Less ideally, one could query something like dig, but I find that "noisy", because it would send an extra request instead of just reading something that already exists locally.
Any ideas?
Using subprocess you could do something like this on macOS or Linux:
import subprocess
process = subprocess.Popen(['cat', '/etc/resolv.conf'],
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
print(stdout, stderr)
or do something like this
import subprocess
with open('dns.txt', 'w') as f:
    process = subprocess.Popen(['cat', '/etc/resolv.conf'], stdout=f)
The first output will go to stdout and the second to a file
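That said, since /etc/resolv.conf is just a local file, plain Python I/O does the same job without spawning cat; a minimal sketch:
# No subprocess needed just to read a local file
with open('/etc/resolv.conf') as f:
    print(f.read())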
Maybe this one will solve your problem
import subprocess
def get_local_dns(cmd_):
    with open('dns1.txt', 'w+') as f:
        with open('dns_log1.txt', 'w+') as flog:
            try:
                process = subprocess.Popen(cmd_, stdout=f, stderr=flog)
            except FileNotFoundError as e:
                flog.write(f"Error while executing this command {str(e)}")
linux_cmd = ['cat', '/etc/resolv.conf']
windows_cmd = ['windows_command', 'parameters']
commands = [linux_cmd, windows_cmd]
if __name__ == "__main__":
    for cmd in commands:
        get_local_dns(cmd)
Thanks @MasterOfTheHouse.
I ended up writing my own function. It's not so elegant, but it does the job for now. There's plenty of room for improvement, but well...
import os
import subprocess
def get_dns_settings() -> dict:
    # Initialize the output variables
    dns_ns, dns_search = [], ''
    # For Unix-based OSs
    if os.path.isfile('/etc/resolv.conf'):
        for line in open('/etc/resolv.conf', 'r'):
            if line.strip().startswith('nameserver'):
                nameserver = line.split()[1].strip()
                dns_ns.append(nameserver)
            elif line.strip().startswith('search'):
                search = line.split()[1].strip()
                dns_search = search
    # If it is not a Unix-based OS, try "the Windows way"
    elif os.name == 'nt':
        cmd = 'ipconfig /all'
        raw_ipconfig = subprocess.check_output(cmd)
        # Convert the bytes into a string
        ipconfig_str = raw_ipconfig.decode('cp850')
        # Convert the string into a list of lines
        ipconfig_lines = ipconfig_str.split('\n')
        for n in range(len(ipconfig_lines)):
            line = ipconfig_lines[n]
            # Parse nameserver in current line and next ones
            # (note: the 'DNS-Server' label is locale-dependent ipconfig output)
            if line.strip().startswith('DNS-Server'):
                nameserver = ':'.join(line.split(':')[1:]).strip()
                dns_ns.append(nameserver)
                next_line = ipconfig_lines[n + 1]
                # If there's a lot of leading blank space, assume we have
                # another nameserver on the next line
                if len(next_line) - len(next_line.strip()) > 10:
                    dns_ns.append(next_line.strip())
                    next_next_line = ipconfig_lines[n + 2]
                    if len(next_next_line) - len(next_next_line.strip()) > 10:
                        dns_ns.append(next_next_line.strip())
            elif line.strip().startswith('DNS-Suffix'):
                dns_search = line.split(':')[1].strip()
    return {'nameservers': dns_ns, 'search': dns_search}
print(get_dns_settings())
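For what it's worth, the third-party dnspython library can also read the local resolver configuration (it parses /etc/resolv.conf on POSIX and the registry on Windows), so a sketch like the one below, assuming dnspython is installed, may save this parsing:
# Sketch using the third-party dnspython package (pip install dnspython)
import dns.resolver

resolver = dns.resolver.Resolver()  # reads the local DNS configuration
print(resolver.nameservers)         # e.g. ['192.168.1.1']
print(resolver.search)              # configured search domains, if any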
By the way... how did you manage to write two answers with the same account?
I want to assign the output of a command I run using os.system to a variable and prevent it from being output to the screen. But in the code below, the output is sent to the screen and the value printed for var is 0, which I guess signifies whether the command ran successfully or not. Is there any way to assign the command output to the variable and also stop it from being displayed on the screen?
var = os.system("cat /etc/services")
print var #Prints 0
From this question which I asked a long time ago, what you may want to use is popen:
os.popen('cat /etc/services').read()
From the docs for Python 3.6,
This is implemented using subprocess.Popen; see that class’s
documentation for more powerful ways to manage and communicate with
subprocesses.
Here's the corresponding code for subprocess:
import subprocess
# note: don't combine an argument list with shell=True; the list form
# already runs the program directly
proc = subprocess.Popen(["cat", "/etc/services"], stdout=subprocess.PIPE)
(out, err) = proc.communicate()
print("program output:", out)
You might also want to look at the subprocess module, which was built to replace the whole family of Python popen-type calls.
import subprocess
output = subprocess.check_output("cat /etc/services", shell=True)
The advantage it has is that there is a ton of flexibility with how you invoke commands, where the standard in/out/error streams are connected, etc.
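For example, if you also want anything the command writes to stderr folded into the captured result, you can redirect it:
import subprocess

# stderr=subprocess.STDOUT merges error output into the returned bytes
output = subprocess.check_output("cat /etc/services", shell=True,
                                 stderr=subprocess.STDOUT)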
The commands module is a reasonably high-level way to do this (Python 2 only):
import commands
status, output = commands.getstatusoutput("cat /etc/services")
status is 0, output is the contents of /etc/services.
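In Python 3 the commands module is gone, but the equivalent helper lives in subprocess:
import subprocess

# subprocess.getstatusoutput is the Python 3 replacement
status, output = subprocess.getstatusoutput("cat /etc/services")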
For python 3.5+ it is recommended that you use the run function from the subprocess module. This returns a CompletedProcess object, from which you can easily obtain the output as well as return code. Since you are only interested in the output, you can write a utility wrapper like this.
from subprocess import PIPE, run
def out(command):
    result = run(command, stdout=PIPE, stderr=PIPE, universal_newlines=True, shell=True)
    return result.stdout
my_output = out("echo hello world")
# Or
my_output = out(["echo", "hello world"])
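On Python 3.7+ the same wrapper can be written more compactly; a sketch:
from subprocess import run

def out(command):
    # capture_output=True (3.7+) implies stdout=PIPE and stderr=PIPE;
    # text=True is the newer name for universal_newlines=True
    return run(command, capture_output=True, text=True, shell=True).stdout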
I know this has already been answered, but I wanted to share a potentially better looking way to call Popen via the use of from x import x and functions:
from subprocess import PIPE, Popen
def cmdline(command):
    process = Popen(
        args=command,
        stdout=PIPE,
        shell=True
    )
    return process.communicate()[0]
print cmdline("cat /etc/services")
print cmdline('ls')
print cmdline('rpm -qa | grep "php"')
print cmdline('nslookup google.com')
I do it with os.system and a temp file:
import tempfile, os
def readcmd(cmd):
    ftmp = tempfile.NamedTemporaryFile(suffix='.out', prefix='tmp', delete=False)
    fpath = ftmp.name
    if os.name == "nt":
        fpath = fpath.replace("/", "\\")  # for Windows
    ftmp.close()
    os.system(cmd + " > " + fpath)
    data = ""
    with open(fpath, 'r') as file:
        data = file.read()
    os.remove(fpath)
    return data
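Usage is then simply:
print(readcmd('echo hello'))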
The Python 2.6 and 3 docs warn about using PIPE for stdout and stderr: the child process can block if it fills the OS pipe buffer and nothing reads it (communicate() is the safe way to use pipes).
One way to sidestep pipes entirely is to redirect output to a file:
import subprocess
# must create a file object to store the output. Here we are getting
# the ssid we are connected to
outfile = open('/tmp/ssid', 'w')
proc = subprocess.Popen(["iwgetid"], bufsize=0, stdout=outfile)
proc.wait()  # let iwgetid finish writing before closing the file
outfile.close()
# now operate on the file
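For example, to read the result back:
# read the SSID that iwgetid wrote to the file
with open('/tmp/ssid') as f:
    ssid = f.read().strip()
print(ssid)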
from os import system, remove
from uuid import uuid4

def bash_(shell_command: str) -> tuple:
    """
    :param shell_command: your shell command
    :return: ( 1 | 0, stdout)
    """
    logfile: str = '/tmp/%s' % uuid4().hex
    # '&>' is a bashism; os.system runs /bin/sh, so use '> file 2>&1' instead
    err: int = system('%s > %s 2>&1' % (shell_command, logfile))
    out: str = open(logfile, 'r').read()
    remove(logfile)
    return err, out

# Example:
print(bash_('cat /usr/bin/vi | wc -l'))
>>> (0, '3296\n')
I have a script that I want to run from within Python (2.6.5) that follows the logic below:
Prompts the user for a password. It looks like ("Enter password: ") (*Note: Input does not echo to screen)
Output irrelevant information
Prompt the user for a response ("Blah Blah filename.txt blah blah (Y/N)?: ")
The last prompt line contains text which I need to parse (filename.txt). The response provided doesn't matter (the program could actually exit here without providing one, as long as I can parse the line).
My requirements are somewhat similar to Wrapping an interactive command line application in a Python script, but the responses there seem a bit confusing, and mine still hangs even when the OP mentions that it doesn't for him.
Through looking around, I've come to the conclusion that subprocess is the best way of doing this, but I'm having a few issues. Here is my Popen line:
p = subprocess.Popen("cmd", shell=True, stdout=subprocess.PIPE,
stderr=subprocess.STDOUT, stdin=subprocess.PIPE)
When I call read() or readline() on stdout, the prompt is printed to the screen and it hangs.
If I call write("password\n") on stdin, the prompt is written to the screen and it hangs. The text in write() is not written (I don't see the cursor move to a new line).
If I call p.communicate("password\n"), same behavior as write().
I was looking for a few ideas here on the best way to input to stdin and possibly how to parse the last line in the output if your feeling generous, though I could probably figure that out eventually.
If you are communicating with a program that subprocess spawns, you should check out A non-blocking read on a subprocess.PIPE in Python. I had a similar problem with my application and found using queues to be the best way to do ongoing communication with a subprocess.
As for getting values from the user, you can always use the raw_input() builtin to get responses, and for passwords, try using the getpass module to get non-echoing passwords from your user. You can then parse those responses and write them to your subprocess' stdin.
I ended up doing something akin to the following:
import sys
import subprocess
from threading import Thread

try:
    from Queue import Queue, Empty
except ImportError:
    from queue import Queue, Empty  # Python 3.x

def enqueue_output(out, queue):
    # with universal_newlines=True the streams are text, so the EOF
    # sentinel is '' rather than b''
    for line in iter(out.readline, ''):
        queue.put(line)
    out.close()

def getOutput(outQueue):
    outStr = ''
    try:
        while True:  # Adds output from the Queue until it is empty
            outStr += outQueue.get_nowait()
    except Empty:
        return outStr

p = subprocess.Popen("cmd", stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=False, universal_newlines=True)

outQueue = Queue()
errQueue = Queue()

outThread = Thread(target=enqueue_output, args=(p.stdout, outQueue))
errThread = Thread(target=enqueue_output, args=(p.stderr, errQueue))
outThread.daemon = True
errThread.daemon = True
outThread.start()
errThread.start()

try:
    someInput = raw_input("Input: ")
except NameError:
    someInput = input("Input: ")

p.stdin.write(someInput + '\n')  # include the newline the child is waiting for
p.stdin.flush()
errors = getOutput(errQueue)
output = getOutput(outQueue)
Once you have the queues made and the threads started, you can loop through getting input from the user, getting errors and output from the process, and processing and displaying them to the user.
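A minimal sketch of such a loop, building on the queues and helpers above (illustrative only):
# drain both queues while the process is alive
while p.poll() is None:
    output = getOutput(outQueue)
    errors = getOutput(errQueue)
    if output:
        sys.stdout.write(output)
    if errors:
        sys.stderr.write(errors)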
Using threading might be slightly overkill for simple tasks.
Instead, os.spawnvpe can be used. It spawns the script in a shell as a process, and you will be able to communicate with the script interactively.
In this example I passed the password as an argument; obviously that is not a good idea.
import os
import sys
from getpass import unix_getpass
def cmd(cmd):
    cmd = cmd.split()
    code = os.spawnvpe(os.P_WAIT, cmd[0], cmd, os.environ)
    if code == 127:
        sys.stderr.write('{0}: command not found\n'.format(cmd[0]))
    return code

password = unix_getpass('Password: ')
cmd_run = './run.sh --password {0}'.format(password)
cmd(cmd_run)

pattern = raw_input('Pattern: ')
lines = []
with open('filename.txt', 'r') as fd:
    for line in fd:
        if pattern in line:
            lines.append(line)
# manipulate lines
If you just want a user to enter a password without it being echoed to the screen just use the standard library's getpass module:
import getpass
print("You entered:", getpass.getpass())
NOTE: The prompt for this function defaults to "Password: ". Also, this will only work on command lines where echoing can be controlled. So if it doesn't work, try running it from a terminal.
This is probably a bit of a silly exercise for me, but it raises a bunch of interesting questions. I have a directory of logfiles from my chat client, and I want to be notified using notify-osd every time one of them changes.
The script that I wrote basically uses os.popen to run the linux tail command on every one of the files to get the last line, and then check each line against a dictionary of what the lines were the last time it ran. If the line changed, it used pynotify to send me a notification.
This script actually worked perfectly, except for the fact that it used a huge amount of cpu (probably because it was running tail about 16 times every time the loop ran, on files that were mounted over sshfs.)
It seems like something like this would be a great solution, but I don't see how to implement that for more than one file.
Here is the script that I wrote. Pardon my lack of comments and poor style.
Edit: To clarify, this is all linux on a desktop.
Not even looking at your source code, there are two ways you could easily do this more efficiently and handle multiple files.
Don't bother running tail unless you have to. Simply os.stat all of the files and record the last modified time. If the last modified time is different, then raise a notification (see the sketch after these two options).
Use pyinotify to call out to Linux's inotify facility; this will have the kernel do option 1 for you and call back to you when any files in your directory change. Then translate the callback into your osd notification.
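A minimal sketch of option 1, polling mtimes with os.stat (names and interval are illustrative):
import os
import time

def watch(paths, interval=1.0):
    last = dict((p, os.stat(p).st_mtime) for p in paths)
    while True:
        for p in paths:
            mtime = os.stat(p).st_mtime
            if mtime != last[p]:
                last[p] = mtime
                print('%s changed' % p)  # raise the osd notification here
        time.sleep(interval)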
Now, there might be some trickiness depending on how many notifications you want when there are multiple messages and whether you care about missing a notification for a message.
An approach that preserves the use of tail would be to instead use tail -f. Open all of the files with tail -f and then use the select module to have the OS tell you when there's additional input on one of the file descriptors open for tail -f. Your main loop would call select and then iterate over each of the readable descriptors to generate notifications. (You could probably do this without using tail and just calling readline() when it's readable.)
Other areas of improvement in your script:
Use os.listdir and native Python filtering (say, using list comprehensions) instead of a popen with a bunch of grep filters.
Update the list of buffers to scan periodically instead of only doing it at program boot.
Use subprocess.Popen instead of os.popen.
If you're already using the pyinotify module, it's easy to do this in pure Python (i.e. no need to spawn a separate process to tail each file).
Here is an example that is event-driven by inotify, and should use very little cpu. When IN_MODIFY occurs for a given path we read all available data from the file handle and output any complete lines found, buffering the incomplete line until more data is available:
import os
import sys

import pynotify
import pyinotify

class Watcher(pyinotify.ProcessEvent):
    def __init__(self, paths):
        self._manager = pyinotify.WatchManager()
        self._notify = pyinotify.Notifier(self._manager, self)
        self._paths = {}
        for path in paths:
            self._manager.add_watch(path, pyinotify.IN_MODIFY)
            fh = open(path, 'rb')
            fh.seek(0, os.SEEK_END)
            self._paths[os.path.realpath(path)] = [fh, '']

    def run(self):
        while True:
            self._notify.process_events()
            if self._notify.check_events():
                self._notify.read_events()

    def process_default(self, evt):
        path = evt.pathname
        fh, buf = self._paths[path]
        data = fh.read()
        lines = data.split('\n')
        # prepend the previous incomplete line.
        if buf:
            lines[0] = buf + lines[0]
        # the last element is an incomplete line ('' if data ended with a
        # newline); buffer it for next time instead of displaying it.
        buf = lines.pop()
        # display a notification
        notice = pynotify.Notification('%s changed' % path, '\n'.join(lines))
        notice.show()
        # and output to stdout
        for line in lines:
            sys.stdout.write(path + ': ' + line + '\n')
        sys.stdout.flush()
        self._paths[path][1] = buf

pynotify.init('watcher')
paths = sys.argv[1:]
Watcher(paths).run()
Usage:
% python watcher.py [path1 path2 ... pathN]
Simple pure-Python solution (not the best, but it doesn't fork, prints 4 empty lines after an idle period, and marks the source of each chunk whenever it changes):
#!/usr/bin/env python
from __future__ import with_statement

'''
Implement multi-file tail
'''

import os
import sys
import time

def print_file_from(filename, pos):
    with open(filename, 'rb') as fh:
        fh.seek(pos)
        while True:
            chunk = fh.read(8192)
            if not chunk:
                break
            sys.stdout.write(chunk)

def _fstat(filename):
    st_results = os.stat(filename)
    return (st_results[6], st_results[8])

def _print_if_needed(filename, last_stats, no_fn, last_fn):
    changed = False
    # Find the size of the file and move to the end
    tup = _fstat(filename)
    # print tup
    if last_stats[filename] != tup:
        changed = True
        if not no_fn and last_fn != filename:
            print '\n<%s>' % filename
        print_file_from(filename, last_stats[filename][0])
        last_stats[filename] = tup
    return changed

def multi_tail(filenames, stdout=sys.stdout, interval=1, idle=10, no_fn=False):
    S = lambda (st_size, st_mtime): (max(0, st_size - 124), st_mtime)
    last_stats = dict((fn, S(_fstat(fn))) for fn in filenames)
    last_fn = None
    last_print = 0
    while 1:
        # print last_stats
        changed = False
        for filename in filenames:
            if _print_if_needed(filename, last_stats, no_fn, last_fn):
                changed = True
                last_fn = filename
        if changed:
            if idle > 0:
                last_print = time.time()
        else:
            if idle > 0 and last_print is not None:
                if time.time() - last_print >= idle:
                    last_print = None
                    print '\n' * 4
        time.sleep(interval)

if '__main__' == __name__:
    from optparse import OptionParser

    op = OptionParser()
    op.add_option('-F', '--no-fn', help="don't print filename when changes",
                  default=False, action='store_true')
    op.add_option('-i', '--idle', help='idle time, in seconds (0 turns off)',
                  type='int', default=10)
    op.add_option('--interval', help='check interval, in seconds', type='int',
                  default=1)
    opts, args = op.parse_args()
    try:
        multi_tail(args, interval=opts.interval, idle=opts.idle,
                   no_fn=opts.no_fn)
    except KeyboardInterrupt:
        pass
Hello, I am using the subprocess.Popen() class and I can successfully execute commands on the terminal, but when I try to execute programs, for example a script written in Python, and try to pass arguments, it fails.
This is the code:
argPath = "test1"
args = open(argPath, 'w')
if self.extract.getByAttr(self.block, 'name', 'args') != None:
    args.write("<request>"+self.extract.getByAttr(self.block, 'name', 'args')[0].toxml()+"</request>")
else:
    args.write('')

car = Popen(shlex.split('python3.1 /home/hidura/webapps/karinapp/Suite/ForeingCode/saveCSS.py', stdin=args, stdout=subprocess.PIPE, stderr=subprocess.PIPE))
args.close()
dataOut = car.stdout.read().decode()
log = car.stderr.read().decode()
if dataOut != '':
    return dataOut.split('\n')
elif log != '':
    return log.split('\n')[0]
else:
    return None
And this is the code from saveCSS.py:
from xml.dom.minidom import parseString
import os
import sys
class savCSS:
    """This class has to save
    the changes on the css file.
    """

    def __init__(self, args):
        document = parseString(args)
        request = document.firstChild
        address = request.getElementsByTagName('element')[0]
        newdata = request.getElementsByTagName('element')[1]
        cssfl = open("/webapps/karinapp/Suite/"+address.getAttribute('value'), 'r')
        cssData = cssfl.read()
        cssfl.close()
        dataCSS = ''
        for child in newdata.childNodes:
            if child.nodeType == 3:
                dataCSS += child.nodeValue
        nwcssDict = {}
        for piece in dataCSS.split('}'):
            nwcssDict[piece.split('{')[0]] = piece.split('{')[1]
        cssDict = {}
        for piece in cssData.split('}'):
            cssDict[piece.split('{')[0]] = piece.split('{')[1]
        for key in nwcssDict:
            if key in cssDict == True:
                del cssDict[key]
                cssDict[key] = nwcssDict[key]
        result = ''
        for key in cssDict:
            result += key+"{"+cssDict[key]+"}"
        cssfl = open(cssfl.name, 'a')
        cssfl.write(result)
        cssfl.close()

if __name__ == "__main__":
    savCSS(sys.stdin)
BTW: There's no output...
Thanks in advance.
OK, I'm ignoring that your code doesn't run (neither the script you try to execute nor the main script actually works), and looking at what you are doing:
It does execute the script, or you would get an error, like "bin/sh: foo: not found".
Also you seem to be using an open file as stdin after you have written to it. That doesn't work.
>>> thefile = open('/tmp/foo.txt', 'w')
>>> thefile.write("Hej!")
4
>>> thefile.read()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IOError: not readable
You need to close the file, and reopen it as a read file. Although better in this case would be to use StringIO, I think.
To talk to the subprocess, you use communicate(), not read() on the pipes.
I'm not sure why you are using shell=True here, it doesn't seem necessary, I would remove it if I was you, it only complicates stuff unless you actually need the shell to do things.
Specifically, you should not split the command into a list when using shell=True. What your code is actually doing is starting a Python prompt.
You should rather use communicate() instead of .stdout.read().
And the code you posted isn't even correct:
Popen(shlex.split('python3.1 /home/hidura/webapps/karinapp/Suite/ForeingCode/saveCSS.py', stdin=args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
There's a missing parenthesis, and from the stdout/stderr parameters, it's clear that you get no output to the console, but rather into pipes (if that's what you meant by "There's no output...").
Your code will actually work on Windows, but on Linux you must remove the shell=True parameter. You should always omit that parameter if you provide the full command line yourself (as a sequence).
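Putting that advice together, a corrected call might look roughly like this (a sketch; request_xml is a hypothetical variable holding the <request>...</request> string instead of writing it to a file):
from subprocess import Popen, PIPE

# no shell=True, arguments as a proper list, XML passed on stdin,
# and communicate() instead of reading the pipes directly
cmd = ['python3.1', '/home/hidura/webapps/karinapp/Suite/ForeingCode/saveCSS.py']
car = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
dataOut, log = car.communicate(request_xml.encode())  # request_xml: hypothetical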
The only nice way I've found is:
import sys
import os
try:
    os.kill(int(sys.argv[1]), 0)
    print "Running"
except:
    print "Not running"
(Source)
But is this reliable? Does it work with every process and every distribution?
Mark's answer is the way to go, after all, that's why the /proc file system is there. For something a little more copy/pasteable:
>>> import os.path
>>> os.path.exists("/proc/0")
False
>>> os.path.exists("/proc/12")
True
On Linux, you can look in the directory /proc/$PID to get information about that process. In fact, if the directory exists, the process is running.
It should work on any POSIX system (although looking at the /proc filesystem, as others have suggested, is easier if you know it's going to be there).
However: os.kill may also fail if you don't have permission to signal the process. You would need to do something like:
import sys
import os
import errno
try:
    os.kill(int(sys.argv[1]), 0)
except OSError, err:
    if err.errno == errno.ESRCH:
        print "Not running"
    elif err.errno == errno.EPERM:
        print "No permission to signal this process!"
    else:
        print "Unknown error"
else:
    print "Running"
I use this to get the processes, and the count of processes, with the specified name:
import os

processname = 'somprocessname'
tmp = os.popen("ps -Af").read()
proccount = tmp.count(processname)

if proccount > 0:
    print(proccount, ' processes running of ', processname, 'type')
Here's the solution that solved it for me:
import os
import subprocess
import re
def findThisProcess(process_name):
    ps = subprocess.Popen("ps -eaf | grep "+process_name, shell=True, stdout=subprocess.PIPE)
    output = ps.stdout.read()
    ps.stdout.close()
    ps.wait()
    return output.decode()  # bytes -> str, needed on Python 3

# This is the function you can use
def isThisRunning(process_name):
    output = findThisProcess(process_name)
    if re.search('path/of/process'+process_name, output) is None:
        return False
    else:
        return True

# Example of how to use
if isThisRunning('some_process') == False:
    print("Not running")
else:
    print("Running!")
I'm a Python + Linux newbie, so this might not be optimal. It solved my problem, and hopefully will help other people as well.
But is this reliable? Does it work with every process and every distribution?
Yes, it should work on any Linux distribution. Be aware, though, that /proc is not always available on other Unix-based systems (FreeBSD, macOS).
Seems to me a PID-based solution is too vulnerable. If the process you're trying to check the status of has been terminated, its PID can be reused by a new process. So, IMO, ShaChris23 the Python + Linux newbie gave the best solution to the problem. Even then, it only works if the process in question is uniquely identifiable by its command string, or you are sure there would be only one running at a time.
I had problems with the versions above (for example, the function also matched parts of other strings), so I wrote my own, modified version of Maksym Kozlenko's:
import subprocess

# proc -> name/id of the process
# id = 1 -> search for pid
# id = 0 -> search for name (default)
def process_exists(proc, id=0):
    ps = subprocess.Popen("ps -A", shell=True, stdout=subprocess.PIPE)
    ps_pid = ps.pid
    output = ps.stdout.read()
    ps.stdout.close()
    ps.wait()
    for line in output.split("\n"):
        if line:
            fields = line.split()
            pid = fields[0]
            pname = fields[3]
            if id == 0:
                if pname == proc:
                    return True
            else:
                if pid == proc:
                    return True
    return False
I think it's more reliable, easier to read and you have the option to check for process ids or names.
Slightly modified version of ShaChris23's script. Checks if the proc_name value is found within the process args string (for example, a Python script executed with python):
import os
import re
import subprocess

def process_exists(proc_name):
    ps = subprocess.Popen("ps ax -o pid= -o args= ", shell=True, stdout=subprocess.PIPE)
    ps_pid = ps.pid
    output = ps.stdout.read()
    ps.stdout.close()
    ps.wait()
    for line in output.split("\n"):
        res = re.findall("(\d+) (.*)", line)
        if res:
            pid = int(res[0][0])
            if proc_name in res[0][1] and pid != os.getpid() and pid != ps_pid:
                return True
    return False
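Usage then looks like, for example (the process name is illustrative):
if process_exists('myscript.py'):
    print("already running")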