Name server reachability in dnspython - Python

I am currently trying to find a way to check whether or not name servers can respond to either TCP or UDP packets.
My idea was to get all the name servers for a domain (for example google.com), store them in a list, and then try to send TCP and UDP messages to all of them.
Although I am getting the name servers, the interpreter raises an error when I try to make a query over UDP (see udpPacket in the code below):
"TypeError: coercing to Unicode: need string or buffer, NS found"
I am new to Python (coming from C and C++) and I am guessing this is just an incompatible type.
I checked dnspython's documentation and could not find what kind of type NS is (probably it's a type of its own) and why it cannot be passed as an argument.
What do you think the problem is? Is there maybe a better way to solve this kind of problem?
def getNSResults(url):
    # create an empty list where we can store all the nameservers we found
    nameServers = []
    nameServers = dns.resolver.query(url, dns.rdatatype.NS, raise_on_no_answer=False)
    # create a dictionary based on all the nameservers.
    # 1st label refers to the ns name of the url that we inserted.
    # 2nd label shows whether or not we received a UDP response.
    # 3rd label shows whether or not we received a TCP response.
    results = {}
    for nameServer in nameServers:
        # make a dns ns query, acts as a dumb message since whatever we send we just care about what we get back
        query = dns.message.make_query(dns.name.from_text(url), dns.rdatatype.ANY)
        query.flags |= dns.flags.AD
        query.find_rrset(query.additional, dns.name.root, 65535, dns.rdatatype.OPT, create=True, force_unique=True)
        # try sending a udp packet to see if it's listening on UDP
        udpPacket = dns.query.udp(query, nameServer)
        # try sending a tcp packet to see if it's listening on TCP
        tcpPacket = dns.query.tcp(None, nameServer)
        # add the results to a dictionary and return it, to be checked later by the user.
        results.update({"nsName" == nameServer, "receivedUDPPacket" == isNotNone(udpPacket), "receivedTCPPacket" == isNotNone(tcpPacket)})
Thanks in advance!

Looking at your code, I see some DNS problems, some Python problems, and some dnspython problems. Let's see if we can't learn something together.
DNS
First, the parameter to your function getNSResults is called url. When you send DNS queries, you query for a domain name. A URL is something totally different (e.g. https://example.com/index.html). I would rename url to something like domain_name, domain, or name. For more on the difference between URLs and domain names, see https://www.copahost.com/blog/domain-vs-url/.
Second, let's talk about what you're trying to do.
I am currently trying to find a way to check whether or not the name servers can respond to either TCP or UDP packets.
My idea behind that was to get all the name servers from a website (for example google.com), store them in a list, and then try to send TCP and UDP messages to all of them.
That sounds like a great approach. I think you might be missing a few details here, so let me explain the steps you can take to do this:
1. Do an NS query for a domain name. You already have this step in your code. What you'll actually get from that query is just another domain name (or multiple domain names). For example, if you run dig +short NS google.com, you'll get this output:
ns3.google.com.
ns1.google.com.
ns4.google.com.
ns2.google.com.
2. At this step, we have a list of one or more names of authoritative servers. Now we need an IP address to use to send them queries. So we'll do a type A query for each of the names we got from step 1 (sketched below).
3. Now we have a list of IP addresses. We can send a DNS query over UDP and one over TCP to each of them to see if they're supported.
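To make steps 1 and 2 concrete, here is a minimal sketch using dnspython's high-level resolver (dnspython 1.x style dns.resolver.query, as in your code; google.com is just a placeholder). Step 3 is covered by the full example at the end of this answer.
import dns.resolver
import dns.rdatatype

def authoritative_server_ips(domain):
    # Step 1: the NS query gives us name server *names*, not addresses.
    ns_answer = dns.resolver.query(domain, dns.rdatatype.NS)
    ns_names = [rdata.target.to_text() for rdata in ns_answer]

    # Step 2: resolve each name server name to its IPv4 address(es).
    ips = []
    for ns_name in ns_names:
        a_answer = dns.resolver.query(ns_name, dns.rdatatype.A)
        ips.extend(rdata.address for rdata in a_answer)
    return ips

print(authoritative_server_ips('google.com'))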
Python
For the most part, your Python syntax is okay.
The biggest red flag I see is the following code:
results.update({"nsName" == nameServer,
"receivedUDPPacket" == isNotNone(udpPacket),
"receivedTCPPacket" == isNotNone(tcpPacket)})
Let's break this down a bit.
First, you have results, which is a dict.
Then you have this:
{"nsName" == nameServer,
"receivedUDPPacket" == isNotNone(udpPacket),
"receivedTCPPacket" == isNotNone(tcpPacket)}
which is a set of bools.
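You can check this in an interpreter; braces containing == comparisons build a set of booleans, not a dict:
>>> {"nsName" == "ns1.google.com.", "receivedUDPPacket" == True}
{False}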
What I think you meant to do was something like this:
results.update({
    "nsName": nameServer,
    "receivedUDPPacket": True,
    "receivedTCPPacket": True
})
Function and variable names in Python are usually written in lowercase, with words separated by underscores (e.g. my_variable, def my_function()). Class names are usually upper camel case (e.g. class MyClass).
None of this is required; you can name your stuff however you want, and plenty of super popular libraries and builtins break this convention. I just figured I'd throw it out there because it can be helpful when reading Python code.
dnspython
When you're not sure about the types of things, or what attributes things have, remember these four friends, all builtin to Python:
1. pdb
2. dir
3. type
4. print
pdb is the Python debugger. Just import pdb, and then put pdb.set_trace() where you want to break. Your code will stop there, and then you can check out the values of all the variables.
dir will return the attributes and methods of whatever you pass to it. Example: print(dir(udpPacket)).
type will return the type of an object.
print as you probably already know, will print out stuff so you can see it.
I'm going to leave this part for you to test out.
Run dir() on everything if you don't know what it is.
I also should probably mention help(), which is super useful for built-in stuff.
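For example, a quick inspection session on the answer object from your NS query might look like this (a sketch; the google.com query is just a placeholder):
import pdb
import dns.resolver
import dns.rdatatype

answer = dns.resolver.query('google.com', dns.rdatatype.NS)

print(type(answer))       # what kind of object is this?
print(dir(answer))        # what attributes and methods does it have?
help(dns.resolver.query)  # built-in documentation for the function

pdb.set_trace()           # drop into the debugger here and poke around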
The summary for this section is that sometimes documentation isn't all there, or is hard to find, especially when you're new to a language/library/whatever.
So you have to figure stuff out on your own, and that means using all the tools I've just mentioned, looking at the source code, things like that.
Summary
I hope this was helpful. I know it's a lot, probably too much, but just be patient and know that DNS and Python are very useful and fun things to learn about.
I went ahead and wrote something up that is a start at what I think you're hoping to achieve.
I recommend walking through the whole thing and making sure you understand what's going on.
If you don't understand something, remember pdb and dir (and there's always Google, SO, etc).
import dns.resolver
import dns.query
import dns.exception
import dns.message
import dns.rdatatype
import json
import sys


def check_tcp_and_udp_support(name):
    # this will give me the first default system resolver from /etc/resolv.conf
    # (or the Windows registry)
    where = dns.resolver.Resolver().nameservers[0]
    q = dns.message.make_query(name, dns.rdatatype.NS)
    ns_response = dns.query.udp(q, where)
    ns_names = [t.target.to_text() for ans in ns_response.answer for t in ans]
    # this code is the same as the one-liner above
    # ns_names = []
    # for ans in ns_response.answer:
    #     for t in ans:
    #         ns_names.append(t.target.to_text())
    results = {}
    for ns_name in ns_names:
        # do a type A lookup for the nameserver
        q = dns.message.make_query(ns_name, dns.rdatatype.A)
        response = dns.query.udp(q, where)
        nameserver_ips = [item.address
                          for ans in response.answer
                          for item in ans.items
                          if ans.rdtype == dns.rdatatype.A]
        # now send queries to the nameserver IPs
        for nameserver_ip in nameserver_ips:
            q = dns.message.make_query('example.com.', dns.rdatatype.A)
            # try a query over UDP
            try:
                udp_response = dns.query.udp(q, nameserver_ip)
                supports_udp = True
            except dns.exception.Timeout:
                supports_udp = False
            # try a query over TCP
            try:
                tcp_response = dns.query.tcp(q, nameserver_ip)
                supports_tcp = True
            except dns.exception.Timeout:
                supports_tcp = False
            results[nameserver_ip] = {
                'supports_udp': supports_udp,
                'supports_tcp': supports_tcp
            }
    return results


def main():
    results = check_tcp_and_udp_support('google.com')
    # this is just fancy JSON printing
    # you could do print(results) instead
    json.dump(results, sys.stdout, indent=4)


if __name__ == '__main__':
    main()
Again, I hope this is helpful. It's hard when I don't know exactly what's going on in your head, but this is what I've got for you.

Related

How to send a Python dictionary consisting of ndarray and None using ZeroMQ PUB/SUB?

I'm trying to send a Python dictionary of 3 images (stored as ndarrays) over ZeroMQ to another program acting as a consumer, and parse the data back into its original form. I followed three approaches, but couldn't achieve success with any of them.
Below is a minimal sample of the code:
import base64
import json
import pickle
import cv2
import zmq

# Adding ZMQ Context
def zmq_context():
    # Creating ZMQ context starts
    context = zmq.Context()
    footage_socket = context.socket(zmq.PUB)
    footage_socket.connect('tcp://localhost:5002')
    return footage_socket

wagon_dtl, ctr1_dtl, ctr2_dtl = NdArrays of images
socket_ctx = zmq_context()

# Trying two different ways of formatting the image before creating the dict,
# the below approach works for all three ndarrays
# 1st way
wagon_dtl = image_np_save  # image_np_save is the original image
# 2nd way (this I tried because an ndarray isn't JSON serializable)
encoded, buffer = cv2.imencode('.jpg', image_np_save)
wagon_dtl = base64.b64encode(buffer)

if cond == "fulfilled":
    full_wgn_dict = {"wagon_dtl": wagon_dtl, "ctr1_dtl": ctr1_dtl, "ctr2_dtl": ctr2_dtl}
    # 1st way
    dict_as_b64text = base64.b64encode(full_wgn_dict)
    socket_ctx.send(dict_as_b64text)
    # 2nd way
    myjson = json.dumps(full_wgn_dict)
    socket_ctx.send(myjson)
    # 3rd way
    dict_as_text = pickle.dumps(full_wgn_dict).encode('base64', 'strict')
    socket_ctx.send(dict_as_text)
How to solve this?
I've followed these Q/As while working on this solution: 1, 2, 3, 4, 5
Q : "How to send a Python dictionary consisting of ndarray and None using ZeroMQ PUB/SUB?"
Easy, one may best use the ready-made .send_pyobj() method for doing exactly this.
On the sending side, the PUB calls the socket.send_pyobj( full_wgn_dict ) method, and that's basically all on this side.
On the receiving side, each of the potential SUB-s calls the .recv_pyobj() method.
Yet all the SUB-s have to also do one more step, to actively subscribe to receive any message at all.
For details on socket.setsockopt( zmq.SUBSCRIBE, "" ) see the ZeroMQ documented API, or do not hesitate to sip from many examples here.
Some additional tricks (not needed for trivial dicts) may help with the pickle phase of the SER/DES stage. Yet these go way beyond the scope of this question, and may bring advantages in controlled environments but problems in open, uncontrolled environments, where you have no chance to meet the required prerequisites. In my apps, I prefer to use import dill as pickle for much higher robustness of the pickle.dumps() SER/DES processing of objects and many more advantages, like storing a full-session snapshot. Credits go to @MikeMcKearns.
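A minimal sketch of both sides, assuming the same tcp://localhost:5002 endpoint as in the question (the bind/connect direction and the placeholder arrays are assumptions, and the two halves would normally live in two separate programs):
import numpy as np
import zmq

# producer program (PUB side)
ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind('tcp://*:5002')
full_wgn_dict = {"wagon_dtl": np.zeros((4, 4)), "ctr1_dtl": None, "ctr2_dtl": None}
pub.send_pyobj(full_wgn_dict)       # pickle-serialises the dict, ndarrays and None included

# consumer program (SUB side)
sub = ctx.socket(zmq.SUB)
sub.connect('tcp://localhost:5002')
sub.setsockopt(zmq.SUBSCRIBE, b'')  # a SUB must subscribe, here to everything
received = sub.recv_pyobj()         # unpickles back into the original dict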
Feel free to re-read the documentation present for all syntax-related details in the __doc__ strings:
>>> print zmq.Socket.send_pyobj.__doc__
Send a Python object as a message using pickle to serialize.
Parameters
----------
obj : Python object
The Python object to send.
flags : int
Any valid flags for :func:`Socket.send`.
protocol : int
The pickle protocol number to use. The default is pickle.DEFAULT_PROTOCOL
where defined, and pickle.HIGHEST_PROTOCOL elsewhere.
>>> print zmq.Socket.recv_pyobj.__doc__
Receive a Python object as a message using pickle to serialize.
Parameters
----------
flags : int
Any valid flags for :func:`Socket.recv`.
Returns
-------
obj : Python object
The Python object that arrives as a message.
Raises
------
ZMQError
for any of the reasons :func:`~Socket.recv` might fail

Modifying HTTPS response packet on the fly with mitmproxy

I am trying to implement an mitmproxy addon script in order to tamper with the data of a particular HTTPS response, which is decrypted on the fly through mitmproxy's certificate injection.
I am following this Stack Overflow answer to a rather similar question, as well as this tutorial from the mitmproxy docs, but without any success so far.
The packet I'm targeting comes from https://api.example.com/api/v1/user/info.
Here is the whole Python script I wrote to tamper with this response data, based upon the aforementioned sources:
from mitmproxy import ctx

class RespModif:
    def __init__(self):
        self.num = 0

    def response(self, flow):
        ctx.log.info("RespModif triggered")
        if flow.request.url == "https://api.example.com/api/v1/user/info":
            ctx.log.info("RespModif triggered -- In the if statement")
            self.num = self.num + 1
            ctx.log.info("RespModif -- Proceeded to %d response modifications "
                         "of the targeted URLs" % self.num)

addons = [
    RespModif()
]
Checking the event log, I can see that the first log message ("RespModif triggered") is reported, but the other two log messages (from inside the if statement) are never reported, which I think means the if statement never succeeds.
Is there something wrong with my code?
How can I get the if statement to succeed ?
PS: The target URL is definitely correct, plus I'm using it with a registered account from the client application that is being sniffed with mitmproxy.
Have you tried using the pretty_url attribute?
Something like:
if flow.request.pretty_url == "https://api.example.com/api/v1/user/info":
....
The pretty_url attribute uses the full domain name (taken from the Host header), whereas url may only contain the corresponding IP address.
Also, logging the content of pretty_url lets you see exactly which URL is going through and gives more visibility into what the code is actually doing.
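For example, a small variation of the response hook that logs pretty_url before comparing (a sketch, reusing the RespModif class from the question):
def response(self, flow):
    ctx.log.info("RespModif triggered for %s" % flow.request.pretty_url)
    if flow.request.pretty_url == "https://api.example.com/api/v1/user/info":
        ctx.log.info("RespModif -- matched the target URL")
        self.num += 1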

Pyro requests go to the wrong nameserver (when there are multiple ones)

I have the following case:
Two servers, each running their own name server (NS)
Each of these servers registers the same object with a different URI. The URI includes the local server's hostname to make it unique
A third server, the client, tries to target the right server to query information available only on that server
Please note that all of these 3 servers can communicate with each other.
My questions and issues:
The requests from the third server always go to the first server, no matter what, except when I shut down the first NS. Is there something definitely wrong with what I'm doing? I guess there is, but I can't figure it out...
Is running separate name servers the root cause? What would be the alternative if this is not allowed? I run multiple name servers for redundancy, as some other upcoming operations can run on either of the first two servers. When I list the content of each name server (locally on each server), I get the right registration (which includes the hostname).
Is the use of Pyro4.config.NS_HOST parameter wrong, see below the usage in the code? What would be the alternative?
My configuration:
Pyro 4-4.63-1.1
Python 2.7.13
Linux OpenSuse (kernel version 4.4.92)
The test code is listed below. I got rid of the details like try blocks and imports, etc...
My server skeleton code (which runs on the first two servers):
daemon = Pyro4.Daemon(local_ip_address)
ns = Pyro4.locateNS()
uri = daemon.register(TestObject())
ns.register("test.object#%s" % socket.gethostname(), uri)
daemon.requestLoop()
The local_ip_address is the one supplied below by the user to contact the correct name server (on the correct server).
The name server is started on each of the first two servers as follows:
python -m Pyro4.naming -n local_ip_address
The local_ip_address is the same as above.
My client skeleton code (which runs on the third server):
target_server_hostname = user_provided_hostname
target_server_ip = user_provided_ip
Pyro4.config.NS_HOST = target_server_ip
uri = "test.object#%s" % target_server_hostname
proxy = Pyro4.Proxy(uri)
proxy._pyroTimeout = my_timeout
proxy._pyroMaxRetries = my_retries
rc, reason = proxy.testAction(target_server_hostname)
if rc != 0:
    print reason
else:
    print "Hostname matches"
If more information is required, please let me know.
Djurdjura.
I think I figured it out. Hope this will be useful to anyone else looking at a similar use case.
You just need to specify where to look for the name server itself. The client code becomes something like the following:
target_server_hostname = user_provided_hostname
target_server_ip = user_provided_ip
# The following statement finds the correct name server
ns = Pyro4.locateNS(host=target_server_ip)
name = "test.object#%s" % target_server_hostname
uri = ns.lookup(name)
proxy = Pyro4.Proxy(uri)
proxy._pyroTimeout = my_timeout
proxy._pyroMaxRetries = my_retries
rc, reason = proxy.testAction(target_server_hostname)
if rc != 0:
    print reason
else:
    print "Hostname matches"
In my case, I guess the alternative would be using a single common name server running on ... the third server (the client server). This server is always on and ready before the other ones. I didn't try this approach yet.
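A minimal sketch of that alternative (assuming a single name server started on the client server; client_server_ip is a placeholder): each backend server registers with that one name server, and the client looks objects up there.
# on each backend server: register with the shared name server
daemon = Pyro4.Daemon(local_ip_address)
ns = Pyro4.locateNS(host=client_server_ip)
uri = daemon.register(TestObject())
ns.register("test.object#%s" % socket.gethostname(), uri)
daemon.requestLoop()

# on the client: look objects up against the same shared name server
ns = Pyro4.locateNS(host=client_server_ip)
uri = ns.lookup("test.object#%s" % target_server_hostname)
proxy = Pyro4.Proxy(uri)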
Regards.
D.
PS. Thanks Irmen for your answer.

python requests...or something else... mysteriously caching? hashes don't change right when file does

I have a very odd bug. I'm writing some code in python3 to check a url for changes by comparing sha256 hashes. The relevant part of the code is as follows:
from copy import deepcopy
from hashlib import sha256
from requests import get

def fetch_and_hash(url):
    file = get(url, stream=True)
    f = file.raw.read()
    hash = sha256()
    hash.update(f)
    return hash.hexdigest()

def check_url(target):  # passed a dict containing the hash from a previous examination of the url
    t = deepcopy(target)
    old_hash = t["hash"]
    new_hash = fetch_and_hash(t["url"])
    if old_hash == new_hash:
        t["justchanged"] = False
        return t
    else:
        t["justchanged"] = True
        return handle_changes(t, new_hash)  # records the changes
So I was testing this on an old webpage of mine. I ran the check, recorded the hash, and then changed the page. Then I re-ran it a few times, and the code above did not reflect a new hash (i.e., it followed the old_hash == new_hash branch).
Then I waited maybe 5 minutes and ran it again without changing the code at all except to add a couple of debugging calls to print(). And this time, it worked.
Naturally, my first thought was "huh, requests must be keeping a cache for a few seconds." But then I googled around and learned that requests doesn't cache.
Again, I changed no code except for print calls. You might not believe me. You might think "he must have changed something else." But I didn't! I can prove it! Here's the commit!
So what gives? Does anyone know why this is going on? If it matters, the webpage is hosted on a standard commercial hosting service, IIRC using Apache, and I'm on a lousy local phone company DSL connection---don't know if there are any serverside caching settings going on, but it's not on any kind of CDN.
So I'm trying to figure out whether there is some mysterious ISP cache thing going on, or I'm somehow misusing requests... the former I can handle; the latter I need to fix!

Making all attributes and methods available for a socket server in Python

I use a Raspberry Pi to collect sensor data and set digital outputs. To make it easy for other applications to set and get values, I'm using a socket server. But I am having trouble finding an elegant way of making all the data available through the socket server without having to write a function for each data type.
Some examples of values and methods I have that I would like to make available on the socket server:
do[2].set_low() # set digital output 2 low
do[2].value=0 # set digital output 2 low
do[2].toggle() # toggle digital output 2
di[0].value # read value for digital input 0
ai[0].value # read value for analog input 0
ai[0].average # get the average calculated value for analog input 0
ao[4].value=255 # set analog output 4 to byte value 255
ao[4].percent=100 # set analog output 4 to 100%
I've tried eval() and exec():
self.request.sendall(str.encode(str(eval('item.' + recv_string)) + '\n'))
eval() works unless I am using an equals sign (=), but I'm not too happy about the solution because of the dangers involved. exec() does the work but does not return any value, and it is also dangerous.
I've also tried getattr():
recv_string = bytes.decode(self.data).lower().split(';')
values = getattr(item, recv_string[0])
self.request.sendall(str.encode(str(values[int(recv_string[1])].value) + '\n'))
This works for getting my attributes, and the above example works for getting the value of the attribute retrieved with getattr(). But I cannot figure out how to use getattr() on the value attribute as well.
The semicolon (;) is used to split the incoming command. I've experimented with multiple ways of formatting the commands:
# unit means that I want to talk to a I/O interface module,
# and the name specified which one
unit;unit_name;get;do;1
unit;unit_name;get;do[1]
unit;unit_name;do[1].value
I am free to choose the format since I am also writing the software that uses these commands. I have not yet found a good format which covers all my needs.
Any suggestions for an elegant way of accessing and returning the data above? Preferably without having to add new methods to the socket server every time a new value type or method is added to my I/O ports.
Edit: This is not public, it's only available on my LAN.
Suggestions
Make your API all methods so that eval can always be used:
def value_m(self, newValue=None):
    if newValue is not None:
        self.value = newValue
    return self.value
Then you can always do
result = str(eval(message))
self.request.sendall(str.encode(result + '\n'))
For your messages, I would suggest formatting them to include the exact syntax of the command, so that they can be evaled as-is, e.g.
message = 'do[1].value_m()' # read a value, alternatively...
message = 'do[1].value_m(None)'
or to write
message = 'do[1].value_m(0)' # write a value
This will make it easy to keep your messages up to date with your API, because they must match exactly; you won't have a second DSL to deal with. You really don't want to have to maintain a second API on top of your IO one.
This is a very simple scheme, suitable for a home project. I would suggest some error handling in evaluation, like so:
import traceback

try:
    result = str(eval(message))
except Exception:
    result = traceback.format_exc()
self.request.sendall(str.encode(result + '\n'))
This way your caller will receive a printout of the exception traceback in the returned message. This will make it much, much easier to debug bad calls.
NOTE If this is public-facing, you cannot do this. All input must be sanitised. You will have to parse each instruction and compare it to the list of available (and desirable) commands, and verify input validity and validity ranges for everything. For such a scenario you are better off simply using one of the input validation systems used for web services, where this problem receives a great deal of attention.
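As a rough illustration of what such parsing could look like, here is a sketch of a whitelist-based dispatcher (the command format, the allowed names, and the io mapping are assumptions, not part of the question's protocol):
# parse "do;1;value" or "ao;4;value;255" style commands instead of eval()-ing raw input
ALLOWED_GROUPS = {'do', 'di', 'ai', 'ao'}
ALLOWED_ATTRS = {'value', 'average', 'percent'}

def dispatch(io, command):
    # io is assumed to be a dict mapping group names to port lists, e.g. {'do': do, 'di': di, ...}
    group, index, attr, *rest = command.split(';')
    if group not in ALLOWED_GROUPS or attr not in ALLOWED_ATTRS:
        raise ValueError('unknown command: %r' % command)
    port = io[group][int(index)]
    if rest:                             # e.g. "ao;4;value;255" sets a value
        setattr(port, attr, int(rest[0]))
        return 'OK'
    return str(getattr(port, attr))      # e.g. "di;0;value" reads a value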
