Resolve DNS (EDNS) with client subnet option in Python

I'm looking for an implementation in Python that would allow me to resolve a DNS address using the DNS extension (EDNS) "client subnet" option. This option allows better DNS resolution for content delivery networks, and ultimately faster internet routing. The motivation is better explained here: http://www.afasterinternet.com/howitworks.htm
Another name for this is "vandergaast-edns-client-subnet".
An implementation for dig is available here:
https://www.gsic.uva.es/~jnisigl/dig-edns-client-subnet.html
I'm looking for a Python implementation that does the same.

I'm the developer/maintainer of dnspython-clientsubnetoption. It is designed to be used in your code as an add-on to dnspython. I've just released version 2.0.0 (after trying to do what you wanted), which makes everything much easier.
pip install clientsubnetoption (works for both Python 2 and Python 3)
Import clientsubnetoption and the dependencies you'll need (dnspython submodules must be imported explicitly):
import dns.message
import dns.query
import clientsubnetoption
Setup your ClientSubnetOption with the information you want:
cso = clientsubnetoption.ClientSubnetOption('1.2.3.4')
Create your DNS packet:
message = dns.message.make_query('google.com', 'A')
Add the edns option:
message.use_edns(options=[cso])
Use message to make your query:
r = dns.query.udp(message, '8.8.8.8')
Option information is now in r.options; there can be multiple options, so you may need to iterate through them to find the ClientSubnetOption object.
for option in r.options:
    if isinstance(option, clientsubnetoption.ClientSubnetOption):
        # do stuff here
        pass
The code in clientsubnetoption.py is there to act as a unit test and a testing tool for support of edns-clientsubnet, not because you have to use it that way.
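If you're curious what actually goes on the wire, the option payload is simple enough to build with only the stdlib. A sketch of the edns-client-subnet option format (IANA option code 8; early draft implementations used the experimental code 0x50fa instead). The helper name is hypothetical and this handles IPv4 only:

```python
import socket
import struct

def build_ecs_option(ip, prefix_len):
    """Build the EDNS client-subnet option (option code 8) for an IPv4 address."""
    addr = socket.inet_aton(ip)
    # Only the bytes covered by the prefix length are sent on the wire.
    significant = addr[:(prefix_len + 7) // 8]
    # FAMILY=1 (IPv4), SOURCE PREFIX-LENGTH, SCOPE PREFIX-LENGTH=0, ADDRESS
    payload = struct.pack("!HBB", 1, prefix_len, 0) + significant
    # OPTION-CODE, OPTION-LENGTH, then the payload
    return struct.pack("!HH", 8, len(payload)) + payload
```

Truncating the address to the prefix length is required by the spec: the resolver should not see more of the client address than the prefix advertises.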

A Python implementation exists:
it's an extension of dnspython (http://www.dnspython.org/) that can be found here: https://github.com/opendns/dnspython-clientsubnetoption
pip install dnspython
git clone the repo from GitHub
use this command:
python clientsubnetoption.py <name-server> <host-to-query> -s <client-ip> -m 32
Note that the repo does not actually print the results; it's just a tester, so it only emits "success" or "failure". To get the actual results you'll need to modify the Python code to print the response from the DNS server.

Related

Python "Send2Pd" doesn't reach Pure Data's "Netreceive"

I need to send messages from Python to Pure Data, so I followed this article.
It was working fine until one day it suddenly stopped working.
Pure Data doesn't receive anything anymore, and I have tried this on both Mac and Linux environments.
I have uploaded the script plus patch here:
but an equivalent code is:
import os
os.system("echo '1;' | pdsend 3000")
and Pure Data should receive the message with a simple
netreceive 3000
Looks like Python and Pure Data can't find a path to communicate.
After I tried to change pdsend to its absolute path as follows:
import os
os.system("echo '1;' | /Users/path_to_pure_data/PureData.app/Contents/Resources/bin/pdsend 3000")
it works.
At this point I don't know why Python doesn't automatically find its path, or how to fix it.
How would Python know that it should look for executables in /Users/path_to_pure_data/PureData.app/Contents/Resources/bin/?
For security (and performance) reasons, the system will only search for executables in a few select locations, which are defined in the PATH environment variable.
You could either add /Users/path_to_pure_data/PureData.app/Contents/Resources/bin/ to your PATH, or put pdsend into a path that is already searched for.
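You can inspect that lookup from Python itself: shutil.which (Python 3.3+) performs the same search the shell does, so it shows at a glance whether pdsend is reachable:

```python
import os
import shutil

# The directories the OS searches for executables:
print(os.environ.get("PATH", "").split(os.pathsep))

# shutil.which() returns the full path if the command is found, else None:
print(shutil.which("pdsend"))  # None unless pdsend is on your PATH
```

If the second print shows None, that confirms the PATH explanation above.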
However, this seems to be overkill in any case: calling an external application is rather costly, and using os.system is probably one of the worst options.
Python is a powerful programming language, and it is very easy to build the client code in Python itself.
E.g. this code does not have any external dependencies, performs faster, and is safer (and, I'd say, it's easier to read as well):
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("localhost", 3000))
s.sendall(b'1;')
s.close()

Using GitPython, how do I do git submodule update --init

My code so far works by doing the following, but I'd like to get rid of the subprocess.call() stuff:
import git
import os
from subprocess import call
repo = git.Repo(repo_path)
repo.remotes.origin.fetch(prune=True)
repo.head.reset(commit='origin/master', index=True, working_tree=True)
# I don't know how to do this using GitPython yet.
os.chdir(repo_path)
call(['git', 'submodule', 'update', '--init'])
My short answer: it's convenient and simple.
Full answer follows. Suppose you have your repo variable:
repo = git.Repo(repo_path)
Then, simply do:
for submodule in repo.submodules:
    submodule.update(init=True)
And via submodule.module() (which is of type git.Repo) you can do all the things with your submodule that you do with an ordinary repo, like this:
sub_repo = submodule.module()
sub_repo.git.checkout('devel')
sub_repo.git.remote('maybeorigin').fetch()
I use things like this in my own porcelain, built on top of Git's porcelain, to manage some projects.
Also, to do it more directly, you can, instead of using call() or subprocess, just do this:
repo = git.Repo(repo_path)
output = repo.git.submodule('update', '--init')
print(output)
You can print it because the method returns the output that you would usually get by running git submodule update --init (obviously the print() part depends on the Python version).
Short answer: You can’t.
Full answer: You can't, and there is also no point. GitPython is not a complete implementation of the whole of Git. It just provides a high-level interface to some common operations. While a few operations are implemented directly in Python, a lot of calls actually use the Git command-line interface to process stuff.
Your fetch line for example does this. Under the hood, there is some trick used to make some calls look like Python although they call the Git executable to process the result—using subprocess as well.
So you could try to figure out how to use the Git cmd interface GitPython offers to support those calls (you can access the instance of that cmd handler via repo.git), or you just continue using the "boring" subprocess calls directly.
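If you do stay with subprocess, you can at least drop the os.chdir() from the question by passing cwd= to subprocess.run; a sketch (the helper name is hypothetical):

```python
import subprocess

SUBMODULE_UPDATE = ["git", "submodule", "update", "--init"]

def submodule_update(repo_path):
    # Runs `git submodule update --init` inside repo_path without
    # changing the process-wide working directory.
    result = subprocess.run(SUBMODULE_UPDATE, cwd=repo_path, check=True,
                            capture_output=True, text=True)
    return result.stdout
```

check=True raises CalledProcessError on a non-zero exit, which is usually what you want in a deployment script.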

KVM api to start virtual machine

I was wondering if there is a KVM API which allows you to start a KVM virtual machine using a simple command, from a python script.
My Python script performs a series of checks to see whether or not we need to start a specific VM, and I would like to start a VM if I need to.
All I need now is to find the API calls, but I can't find a simple call to start a VM on the libvirt website. Does anybody know if this is possible?
You can use the create() function from the python API bindings of libvirt:
import libvirt
#connect to hypervisor running on localhost
conn = libvirt.open('qemu:///system')
dom0 = conn.lookupByName('my-vm-1')
dom0.create()
Basically, the Python API mirrors the C API: conn.C_API_CALL is the C call minus the virConnect prefix, and dom.C_API_CALL is the C call minus the virDomain prefix.
See the libvirt API documentation for the create call.
The simplest way, though probably not the best recommended way, is to use os.system from Python to invoke qemu-kvm.
This method has the disadvantage that you will have to manage the VM manually.
Using libvirt, you will first have to define a domain by calling virt-install.
virt-install \
--connect qemu:///system \
--virt-type kvm \
--name MyNewVM \
--ram 512 \
--disk path=/var/lib/libvirt/images/MyNewVM.img,size=8 \
--vnc \
--cdrom /var/lib/libvirt/images/Fedora-14-x86_64-Live-KDE.iso \
--network network=default,mac=52:54:00:9c:94:3b \
--os-variant fedora14
I have picked this directly from http://wiki.libvirt.org/page/VM_lifecycle
Once you create the domain, you can use virsh start MyNewVM to start the VM. Using this method, it is much easier to manage the VM.
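If you'd rather shell out to virsh from the Python script than use the libvirt bindings, a small wrapper keeps the call in one place (helper names are hypothetical):

```python
import subprocess

def virsh_args(action, domain):
    # Build the argument list, e.g. ["virsh", "start", "MyNewVM"]
    return ["virsh", action, domain]

def start_vm(domain):
    # Runs `virsh start <domain>`; raises CalledProcessError on failure.
    subprocess.run(virsh_args("start", domain), check=True)
```

The same wrapper works for "shutdown", "destroy", and the other lifecycle verbs virsh exposes.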
It seems like using libvirt or calling the [qemu-]kvm command are the two alternatives for Pythonistas. Maybe you can find interesting snippets in the kvmtools project code: http://www.linux-kvm.org/page/Kvmtools (see ./kvmtools/kvm/build_command.py and kvm_boot_action in ./kvmtools/kvm/action.py, which use the subprocess module instead of os.system).
You can use virsh commands if you need to manage your KVM.
You can use the help from virsh to list all the options; the start command might help you.
If you are using a Python script for managing your KVM, I would suggest going through the following script as well; it will give you a good idea: http://russell.ballestrini.net/series/virt-back/

Python client library for WebDAV

I'd like to implement a piece of functionality in my application that uploads and manipulates files on a WebDAV server. I'm looking for a mature Python library that would give an interface similar to the os.* modules for working with the remote files. Googling has turned up a smattering of options for WebDAV in Python, but I'd like to know which are in wider use these days.
It's sad that for this question ("Which Python WebDAV library should I use?"), which surely interests more than one person, an unrelated answer was accepted ("don't use a Python WebDAV library"). Well, a common problem on Stack Exchange.
For people looking for real answers, and given the requirements in the original question (a simple API similar to the os module), I may suggest easywebdav, which has a very easy API and even a nice and simple implementation, offering upload/download and a few file/dir management methods. Due to the simple implementation, it so far doesn't support directory listing, but a bug for that was filed and the author intends to add it.
I just had a similar need and ended up testing a few Python WebDAV clients for my needs (uploading and downloading files from a WebDAV server). Here's a summary of my experience:
1) The one that worked for me is python-webdav-lib.
Not much documentation, but a quick look at the code (in particular the example) was enough to figure out how to make it work for me.
2) PyDAV 0.21 (the latest release I found) doesn't work with Python 2.6 because it uses strings as exceptions. I didn't try to fix this, expecting further incompatibilities later on.
3) davclient 0.2.0. I looked at it but didn't explore any further because the documentation didn't mention the level of API I was looking for (file upload and download).
4) Python_WebDAV_Library-0.3.0. Doesn't seem to have any upload functionality.
Apparently you're looking for a WebDAV client library.
Not sure how the gazillion hits came up; the following two look relevant:
PyDAV:
http://users.sfo.com/~jdavis/Software/PyDAV/readme.html#client
Zope - and look for client.py
import easywebdav
webdav = easywebdav.connect(
    host='dav.dumptruck.goldenfrog.com',
    username='_snip_',
    port=443,
    protocol="https",
    password='_snip_')
_file = "test.py"
print webdav.cd("/dav/")
# print webdav._get_url("")
# print webdav.ls()
# print webdav.exists("/dav/test.py")
# print webdav.exists("ECS.zip")
# print webdav.download(_file, "./"+_file)
print webdav.upload("./test.py", "test.py")
I have no experience with any of these libraries, but the Python Package Index ("PyPi") lists quite a few webdav modules.
Install:
$ sudo apt-get install libxml2-dev libxslt-dev python-dev
$ sudo apt-get install libcurl4-openssl-dev python-pycurl
$ sudo easy_install webdavclient
Examples:
import webdav.client as wc
options = {
    'webdav_hostname': "https://webdav.server.ru",
    'webdav_login': "login",
    'webdav_password': "password"
}
client = wc.Client(options)
client.check("dir1/file1")
client.info("dir1/file1")
files = client.list()
free_size = client.free()
client.mkdir("dir1/dir2")
client.clean("dir1/dir2")
client.copy(remote_path_from="dir1/file1", remote_path_to="dir2/file1")
client.move(remote_path_from="dir1/file1", remote_path_to="dir2/file1")
client.download_sync(remote_path="dir1/file1", local_path="~/Downloads/file1")
client.upload_sync(remote_path="dir1/file1", local_path="~/Documents/file1")
client.download_async(remote_path="dir1/file1", local_path="~/Downloads/file1", callback=callback)
client.upload_async(remote_path="dir1/file1", local_path="~/Documents/file1", callback=callback)
link = client.publish("dir1/file1")
client.unpublish("dir1/file1")
Links:
Source code here
Package here
I don't know of any specifically but, depending on your platform, it may be simpler to mount and access the WebDAV-served files through the file system. There's davfs2 out there and some OS's, like Mac OS X, have WebDAV file system support built in.
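Since WebDAV is plain HTTP with a few extra verbs, a simple upload needs nothing beyond the Python 3 stdlib: urllib.request.Request accepts a custom method, and a WebDAV file upload is just an HTTP PUT. A sketch (the helper name and URL are placeholders):

```python
import urllib.request

def make_put_request(url, data):
    # A WebDAV file upload is an HTTP PUT of the raw file body.
    req = urllib.request.Request(url, data=data, method="PUT")
    req.add_header("Content-Type", "application/octet-stream")
    return req  # execute with urllib.request.urlopen(req)
```

For servers requiring authentication you would additionally install an opener with an HTTPBasicAuthHandler, or use one of the libraries above.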

Mercurial scripting with python

I am trying to get the Mercurial revision number/id (it's a hash, not a number) programmatically in Python.
The reason is that I want to add it to the css/js files on our website like so:
<link rel="stylesheet" href="example.css?{% mercurial_revision "example.css" %}" />
So that whenever a change is made to the stylesheet, it will get a new url and no longer use the old cached version.
OR if you know where to find good documentation for the mercurial python module, that would also be helpful. I can't seem to find it anywhere.
My Solution
I ended up using subprocess to just run a command that gets the hg node. I chose this solution because the API is not guaranteed to stay the same, but the command-line interface probably will:
import subprocess
def get_hg_rev(file_path):
    pipe = subprocess.Popen(
        ["hg", "log", "-l", "1", "--template", "{node}", file_path],
        stdout=subprocess.PIPE
    )
    return pipe.stdout.read()
example use:
> path_to_file = "/home/jim/workspace/lgr/pinax/projects/lgr/site_media/base.css"
> get_hg_rev(path_to_file)
'0ed525cf38a7b7f4f1321763d964a39327db97c4'
It's true there's no official API, but you can get an idea about best practices by reading other extensions, particularly those bundled with hg. For this particular problem, I would do something like this:
from mercurial import ui, hg
from mercurial.node import hex
repo = hg.repository('/path/to/repo/root', ui.ui())
fctx = repo.filectx('/path/to/file', 'tip')
hexnode = hex(fctx.node())
Update: At some point the parameter order changed; now it's like this:
repo = hg.repository(ui.ui(), '/path/to/repo/root' )
Do you mean this documentation?
Note that, as stated in that page, there is no official API, because they still reserve the right to change it at any time. But you can see the list of changes in the last few versions, it is not very extensive.
An updated, cleaner subprocess version (uses .check_output(), added in Python 2.7/3.1) that I use in my Django settings file for a crude end-to-end deployment check (I dump it into an HTML comment):
import subprocess
HG_REV = subprocess.check_output(['hg', 'id', '--id']).strip()
You could wrap it in a try if you don't want some strange hiccup to prevent startup:
try:
    HG_REV = subprocess.check_output(['hg', 'id', '--id']).strip()
except OSError:
    HG_REV = "? (Couldn't find hg)"
except subprocess.CalledProcessError as e:
    HG_REV = "? (Error {})".format(e.returncode)
except Exception:  # don't use bare 'except', mis-catches Ctrl-C and more
    # should never have to deal with a hangup
    HG_REV = "???"
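One wrinkle worth handling: hg id --id appends a + to the hash when the working copy has uncommitted changes, so the value is not always a clean hex string. A small normalizer (the function name is hypothetical):

```python
def parse_hg_id(output):
    """Split `hg id --id` output into (revision, dirty):
    Mercurial appends '+' when the working copy has local changes."""
    rev = output.strip()
    dirty = rev.endswith("+")
    return rev.rstrip("+"), dirty
```

This matters if you use the value in a cache-busting URL: a dirty-tree marker would otherwise leak into the query string.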
If you are using Python 2, you want to use hglib.
I don't know what to use if you're using Python 3, sorry. Probably hgapi.
Contents of this answer
Mercurial's APIs
How to use hglib
Why hglib is the best choice for Python 2 users
If you're writing a hook, that discouraged internal interface is awfully convenient
Mercurial's APIs
Mercurial has two official APIs.
The Mercurial command server. You can talk to it from Python 2 using the hglib (wiki, PyPI) package, which is maintained by the Mercurial team.
Mercurial's command-line interface. You can talk to it via subprocess, or hgapi, or somesuch.
How to use hglib
Installation:
pip install python-hglib
Usage:
import hglib
client = hglib.open("/path/to/repo")
commit = client.log("tip")[0]
print commit.author
More usage information on the hglib wiki page.
Why hglib is the best choice for Python 2 users
Because it is maintained by the Mercurial team, and it is what the Mercurial team recommend for interfacing with Mercurial.
From Mercurial's wiki, the following statement on interfacing with Mercurial:
For the vast majority of third party code, the best approach is to use Mercurial's published, documented, and stable API: the command line interface. Alternately, use the CommandServer or the libraries which are based on it to get a fast, stable, language-neutral interface.
From the command server page:
[The command server allows] third-party applications and libraries to communicate with Mercurial over a pipe that eliminates the per-command start-up overhead. Libraries can then encapsulate the command generation and parsing to present a language-appropriate API to these commands.
The Python interface to the Mercurial command-server, as said, is hglib.
The per-command overhead of the command-line interface is no joke, by the way. I once built a very small test suite (only about 5 tests) that used hg via subprocess to create, commit by commit, a handful of repos with e.g. merge situations. Throughout the project, the runtime of the suite stayed between 5 and 30 seconds, with nearly all of the time spent in the hg calls.
If you're writing a hook, that discouraged internal interface is awfully convenient
The signature of a Python hook function is like so:
# In the hgrc:
# [hooks]
# preupdate.my_hook = python:/path/to/file.py:my_hook
def my_hook(
        ui, repo, hooktype,
        ... hook-specific args, find them in `hg help config` ...,
        **kwargs):
ui and repo are part of the aforementioned discouraged unofficial internal API. The fact that they are right there in your function args makes them terribly convenient to use, such as in this example of a preupdate hook that disallows merges between certain branches.
def check_if_merge_is_allowed(ui, repo, hooktype, parent1, parent2, **kwargs):
    from_ = repo[parent2].branch()
    to_ = repo[parent1].branch()
    ...
    # return True if the hook fails and the merge should not proceed.
If your hook code is not so important, and you're not publishing it, you might choose to use the discouraged unofficial internal API. If your hook is part of an extension that you're publishing, better use hglib.
Give the keyword extension a try.
FWIW to avoid fetching that value on every page/view render, I just have my deploy put it into the settings.py file. Then I can reference settings.REVISION without all the overhead of accessing mercurial and/or another process. Do you ever have this value change w/o reloading your server?
I wanted to do the same thing the OP wanted: get hg id -i from a script (get the tip revision of the whole repository, not of a single file in that repo), but I did not want to use popen. The code from brendan got me started, but wasn't quite what I wanted.
So I wrote this... Comments/criticism welcome. It gets the tip rev in hex as a string.
from mercurial import ui, hg
# from mercurial.node import hex  # should I have used this?

def getrepohex(reporoot):
    repo = hg.repository(ui.ui(), reporoot)
    revs = repo.revs('tip')
    if len(revs) == 1:
        return str(repo.changectx(revs[0]))
    else:
        raise Exception("Internal failure in getrepohex")
