Query machine for hostname - python

I want to be able to scan a network of servers and match IP addresses to hostnames.
I saw a lot of questions about this (with a lot of down votes), but none are exactly what I'm looking for.
So I've tried python's socket library socket.gethostbyaddr(ip). But this only returns results if I have a DNS setup or the IP-to-host mapping is in my hosts file.
I want to be able to ask a machine for its hostname directly, rather than querying DNS.
How can I query a Linux machine for its hostname?
Preferably using Python or bash, but other approaches are fine too.

You can remotely execute the hostname command on these machines (over SSH, for example) to obtain the hostname.
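A minimal sketch of that approach, assuming key-based SSH access to the target machine ("server1.example.com" below is a placeholder):

```python
import subprocess

def get_remote_hostname(host, timeout=10):
    """Run `hostname` on a remote machine via ssh and return its output."""
    result = subprocess.run(
        ["ssh", host, "hostname"],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout.strip()

# Example (requires SSH access): get_remote_hostname("server1.example.com")

# Local sanity check: `uname -n` prints the same node name as `hostname`.
local_name = subprocess.run(
    ["uname", "-n"], capture_output=True, text=True
).stdout.strip()
print(local_name)
```

With ssh-agent or key-based authentication set up, this avoids any password prompt; without SSH access to the machine, there is no general way to ask it for its hostname unless it runs a service that reports it.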

Related

Monitor all sorts of IP addresses in Python or command line

I am wondering if there is a way to monitor all data flow from the ports of an IP that might not be in my local network. I prefer doing this in Python and/or command line. Thank you.
I'm thinking you might want to try nmap, which runs on the command line. https://nmap.org/
"Nmap uses raw IP packets in novel ways to determine what hosts are available on the network, what services (application name and version) those hosts are offering, what operating systems (and OS versions) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics. It was designed to rapidly scan large networks, but works fine against single hosts."
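nmap itself is a command-line tool, but its "grepable" output (-oG) is easy to post-process from Python. The sample text below is a typical fragment of `nmap -sn -oG - 192.168.1.0/24` output, assumed here so the sketch runs without nmap installed:

```python
import re

# Assumed sample of nmap's grepable (-oG) output for a ping scan.
sample = """\
# Nmap 7.80 scan initiated
Host: 192.168.1.101 (router.lan)\tStatus: Up
Host: 192.168.1.104 ()\tStatus: Up
# Nmap done: 256 IP addresses (2 hosts up) scanned
"""

def hosts_up(grepable_output):
    """Return the IPs reported as Up in nmap -oG output."""
    pattern = re.compile(r"^Host: (\S+) .*Status: Up", re.MULTILINE)
    return pattern.findall(grepable_output)

print(hosts_up(sample))  # ['192.168.1.101', '192.168.1.104']
```

For a live scan you would feed the function the stdout of `subprocess.run(["nmap", "-sn", "-oG", "-", "192.168.1.0/24"], capture_output=True, text=True)`.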

get IP addresses of my local network

I am working on a GUI program to control power supplies over Ethernet.
I have DHCP activated on my computer, so I assume that the IP addresses of my power supplies are assigned by my computer.
I would like to know the IP addresses of my power supplies, in order to communicate with them through the TCP/IP protocol, using Python.
For the moment, I use a program called LXI discovery tools, and while it is running, the Windows command arp -a gives me the IP addresses of my power supplies.
The problem is that I need to run this LXI program. Is that mandatory?
Since my computer runs the DHCP server and assigns the IP addresses, isn't there a way to get those addresses more easily?
Moreover, can the Python socket library help me here?
Finally, I solved my problem by using static IP addresses. Since I know them in advance, I no longer need to "scan" my network.
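For completeness: the `arp -a` output mentioned above can also be parsed directly from Python, which avoids depending on the LXI tool's GUI. The sample below mimics Windows `arp -a` output (assumed here so the sketch runs without a real ARP table):

```python
import re
import subprocess  # would be used to run `arp -a` for real

# Assumed sample of Windows `arp -a` output.
sample = """\
Interface: 192.168.1.10 --- 0x4
  Internet Address      Physical Address      Type
  192.168.1.1           00-11-22-33-44-55     dynamic
  192.168.1.50          66-77-88-99-aa-bb     dynamic
"""

def arp_entries(text):
    """Return (ip, mac) pairs from `arp -a` style output."""
    pattern = re.compile(r"(\d+\.\d+\.\d+\.\d+)\s+([0-9a-fA-F-]{17})")
    return pattern.findall(text)

# For a live table you would use:
#   text = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
print(arp_entries(sample))
```

Note that entries only appear in the ARP table after some traffic has reached each device, which is what the discovery tool was triggering.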

Python Multiprocessing with Distributed Cluster Using Pathos

I am trying to make use of multiprocessing across several different computers, which pathos seems geared towards: "Pathos is a framework for heterogenous computing. It primarily provides the communication mechanisms for configuring and launching parallel computations across heterogenous resources." In looking at the documentation, however, I am at a loss as to how to get a cluster up and running. I am looking to:
Set up a remote server or set of remote servers with secure authentication.
Securely connect to the remote server(s).
Map a task across all CPUs in both the remote servers and my local machine using a straightforward API like pool.map in the standard multiprocessing package (like the pseudocode in this related question).
I do not see an example for (1), and I do not understand the tunnel example provided for (2). The example does not actually connect to an existing service on the localhost. I would also like to know if/how I can require this communication to come with a password/key of some kind that would prevent someone else from connecting to the server. I understand this uses SSH authentication, but absent a preexisting key that only ensures the traffic is not read as it passes over the Internet, and does nothing to prevent someone else from hijacking the server.
I'm the pathos author. Basically, for (1) you can use pathos.pp to connect to another computer through a socket connection. pathos.pp has almost exactly the same API as pathos.multiprocessing, although with pathos.pp you can give the address and port of a remote host to connect to, using the keyword servers when setting up the Pool.
However, if you want to make a secure connection with SSH, it's best to establish an SSH-tunnel connection (as in the example you linked to), and then pass localhost and the local port number to the servers keyword in Pool. This will then connect to the remote pp-worker through the SSH tunnel. See:
https://github.com/uqfoundation/pathos/blob/master/examples/test_ppmap2.py and
http://www.cacr.caltech.edu/~mmckerns/pathos.html
Lastly, if you are using pathos.pp with a remote server, as above, you are already doing (3). However, it can be more efficient (for a sufficiently embarrassingly parallel set of jobs) to nest the parallel maps… so first use pathos.pp.ParallelPythonPool to build a parallel map across servers, then run an N-way job using a parallel map in pathos.multiprocessing.ProcessingPool inside the function you are mapping with pathos.pp. This minimizes the communication across the remote connection.
Also, you don't need to give an SSH password if you have ssh-agent working for you. See: http://mah.everybody.org/docs/ssh. Pathos assumes that for parallel maps across remote servers you will have ssh-agent working, so you won't need to type your password every time there's a connection.
EDIT: added example code on your question here: Python Multiprocessing with Distributed Cluster

Decentralized networking in Python - How?

I want to write a Python script that will check the user's local network for other instances of the script currently running.
For the purposes of this question, let's say that I'm writing an application that runs solely via the command line, and will just update the screen when another instance of the application is "found" on the local network. Sample output below:
$ python question.py
Thanks for running ThisApp! You are 192.168.1.101.
Found 192.168.1.102 running this application.
Found 192.168.1.104 running this application.
What libraries/projects exist to help facilitate something like this?
One way to do this would be for the application in question to broadcast UDP packets, with your application receiving them from different nodes and then displaying them. The Twisted networking framework provides facilities for doing such a job. The documentation provides some simple examples too.
Well, you could write something using the socket module. You would have to have two programs though, a server on the users local computer, and then a client program that would interface with the server. The server would also use the select module to listen for multiple connections. You would then have a client program that sends something to the server when it is run, or whenever you want it to. The server could then print out which connections it is maintaining, including the details such as IP address.
This is documented extremely well at this link, more so than you need but it will explain it to you as it did to me. http://ilab.cs.byu.edu/python/
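A minimal sketch of that server/client idea, using select() as described (everything runs over loopback here so the example is self-contained; in real use the server would listen on the LAN and each peer would run the client part):

```python
import select
import socket
import threading
import time

def registry_server(server_sock, peers, stop_event):
    """select()-based loop: record the address of every client that connects."""
    while not stop_event.is_set():
        readable, _, _ = select.select([server_sock], [], [], 0.1)
        for sock in readable:
            conn, addr = sock.accept()
            if conn.recv(1024):              # e.g. b"HELLO" announcement
                peers.append(addr[0])
            conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))                # loopback; OS picks a free port
server.listen(5)
port = server.getsockname()[1]

peers, stop = [], threading.Event()
thread = threading.Thread(target=registry_server, args=(server, peers, stop))
thread.start()

# A client announces itself to the server, as the answer describes.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"HELLO")
client.close()

for _ in range(50):                          # wait briefly for the server thread
    if peers:
        break
    time.sleep(0.05)
stop.set()
thread.join()
server.close()
print("Found", peers[0], "running this application.")
```

The server could equally keep a dict of live connections and print them on demand; select() is what lets a single thread watch many sockets at once.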
You can try broadcast UDP; I found an example here: http://vizible.wordpress.com/2009/01/31/python-broadcast-udp/
You can have a server-based solution: a central server where clients register themselves, and query for other clients being registered. A server framework like Twisted can help here.
In a peer-to-peer setting, push technologies like UDP broadcasts can be used, where each client puts out a heartbeat packet every so often on the network, for others to receive. Basic modules like socket would help with that.
Alternatively, you could go for a pull approach, where the interested peer needs to discover the others actively. This is probably the least straightforward. For one, you need to scan the network, i.e. find out which IPs belong to the local network and go through them. Then you would need to contact each IP in turn. If your program opens a TCP port, you could try to connect to it and find out whether your program is running there. If you want your program to be completely ignorant of these queries, you might need to open an SSH connection to the remote IP and scan the process list for your program. All this might involve various modules and libraries. One you might want to look at is execnet.
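The UDP heartbeat idea above can be sketched with the socket module alone. For a real LAN broadcast you would send to "<broadcast>" (255.255.255.255); here the packet goes over loopback so the example is self-contained, and the port number is an arbitrary assumption:

```python
import socket

PORT = 50007          # arbitrary well-known port all peers agree on

# Receiver: every peer listens on the same port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
receiver.bind(("", PORT))
receiver.settimeout(5)

# Sender: each peer periodically emits a heartbeat naming itself.
# Real use: sender.sendto(payload, ("<broadcast>", PORT))
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sender.sendto(b"HEARTBEAT 192.168.1.101", ("127.0.0.1", PORT))

data, addr = receiver.recvfrom(1024)
print("Found", data.split()[1].decode(), "running this application.")
sender.close()
receiver.close()
```

In a real deployment the send would sit in a timer loop and the receive side would track which peers have been heard from recently, expiring those whose heartbeats stop.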

Facing a problem when trying to send an email using Python

I wrote the code like this
import smtplib
server = smtplib.SMTP('localhost')
Then it raised an error like
error: [Errno 10061] No connection could be made because the target machine actively refused it
I am new to SMTP, can you tell what exactly the problem is?
It sounds like SMTP is not set up on the computer you are trying this from. Try using your ISP's mail server (often something like mail.example.com) or make sure you have an SMTP server installed locally.
Rather than trying to install an SMTP server locally, you can set up a simple debugging SMTP server in a console.
Do this:
python -m smtpd -n -c DebuggingServer localhost:1025
And all mails will be printed to the console.
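Once that debugging server is running, plain smtplib pointed at localhost:1025 will deliver to it. A minimal sketch (the addresses are placeholders):

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Hello from smtplib"
msg.set_content("This message goes to the local debugging server.")

def send_to_debug_server(message, host="localhost", port=1025):
    """Deliver to the `python -m smtpd` debugging server shown above."""
    with smtplib.SMTP(host, port) as server:
        server.send_message(message)

# send_to_debug_server(msg)  # uncomment with the debugging server running
```

(Note that the smtpd module was removed from the standard library in Python 3.12; the third-party aiosmtpd package provides a similar console debugging server.)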
To send e-mail using the Python SMTP module you need to somehow obtain the name of a valid mail exchanger (MX). That's either a local hub or smart relay of your own or you can query DNS for the public MX records for each target host/domain name.
This requirement is glossed over in the docs. It's a horrible omission that Python's standard libraries don't provide an easy way to query DNS for an MX record. (There are rather nice third-party Python DNS libraries such as DNSPython and PyDNS with extensive support for way more DNS than you need for anything related to e-mail.)
In general you're probably better off using a list of hubs or relays from your own network (or ISP). This is because your efforts to send mail directly to the published MX hosts may otherwise run afoul of various attempts to fight spam. (For example it frequently won't be possible from wireless networks in coffee shops, hotels, and across common household cable and DSL connections; most of those address ranges are listed in various databases as potential sources of spam.) In that case you could store and/or retrieve the names/addresses of your local mail hubs (or smart relays) through any means you like. It could be a .cfg file (ConfigParser), or through an LDAP or SQL query or even (gasp!) hard-coded into your scripts.
If, however, your code is intended to run on a suitable network (for example in a colo, or a data center) then you'll have to do your own MX resolution. You could install one of the aforementioned PyPI packages. If you need to limit yourself to the standard libraries then you might be able to rely on the commonly available dig utility that's included with most installations of Linux, MacOS X, Solaris, FreeBSD and other fine operating systems.
In that case you'd call a command like dig +short aol.com mx | awk '{print $NF}' through subprocess.Popen() which can be done with this rather ugly one-liner:
mxers = subprocess.Popen("dig +short %s mx | awk '{print $NF}'"
                         % target_domain, stdout=subprocess.PIPE,
                         shell=True).communicate()[0].split()
Then you can attempt to make an SMTP connection to each of the resulting hostnames in turn. (This is fine so long as your "target_domain" value is adequately sanitized; don't pass untrusted data through Popen() with shell=True).
The safer version looks even hairier:
mxers = subprocess.Popen(["dig", "+short", target_domain, "mx"],
                         stdout=subprocess.PIPE).communicate()[0].split()[1::2]
... where the slice/stride at the end replaces the call to awk, obviating the need for shell=True.
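To see why the [1::2] stride works: `dig +short <domain> mx` prints "priority exchanger" pairs, so after split() the exchanger names are every second token. A sketch with sample dig output (assumed here so it runs without dig installed):

```python
# Assumed sample of `dig +short example.com mx` output.
dig_output = b"10 mx1.example.com.\n20 mx2.example.com.\n"

tokens = dig_output.split()      # [b'10', b'mx1...', b'20', b'mx2...']
mxers = tokens[1::2]             # keep only the exchanger names
print(mxers)                     # [b'mx1.example.com.', b'mx2.example.com.']
```

If you care about trying the lowest-priority exchanger first, sort the pairs by the numeric token before slicing; dig does not guarantee any particular order.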
