Python send cmd on socket

I have a simple question about Python:
I have another Python script listening on a port on a Linux machine.
I have made it so I can send a request to it, and it will inform another system that it is alive and listening.
My problem is that I don't know how to send this request from another Python script running on the same machine (blush).
I have a script running every minute, and I would like to expand it to also send this request. I don't expect to get a response back; my listening script posts to a database.
In Internet Explorer, I write like this: http://192.168.1.46:8193/?Ping
I would like to know how to do this from Python, and preferably just send and not hang if the other script is not running.
thanks
Michael

It looks like you are doing an HTTP request, rather than an ICMP ping.
urllib2, built into Python, can help you do that.
You'll need to override the timeout so you aren't hanging too long. Here is some example code for you to tweak with your desired timeout and URL.
import socket
import urllib2
# timeout in seconds
timeout = 10
socket.setdefaulttimeout(timeout)
# this call to urllib2.urlopen now uses the default timeout
# we have set in the socket module
req = urllib2.Request('http://www.voidspace.org.uk')
response = urllib2.urlopen(req)

import urllib2

try:
    response = urllib2.urlopen('http://192.168.1.46:8193/?Ping', timeout=2)
    print 'response headers: "%s"' % response.info()
except IOError, e:
    if hasattr(e, 'code'):      # HTTPError
        print 'http error code: ', e.code
    elif hasattr(e, 'reason'):  # URLError
        print "can't connect, reason: ", e.reason
    else:
        raise  # don't know what it is
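Since the goal is to just fire the request and never hang if the listener is down, a minimal sketch of a fire-and-forget helper might look like this (the URL and the two-second timeout come from above; the function name is only illustrative):

import socket
import urllib2

def notify_listener(url='http://192.168.1.46:8193/?Ping', timeout=2):
    # send the ping and ignore every failure; we never want to hang or crash the caller
    try:
        response = urllib2.urlopen(url, timeout=timeout)
        response.close()
    except (urllib2.URLError, socket.error):
        pass  # the listener is not running; just carry on

You could call notify_listener() from the script that already runs every minute.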

This is a bit outside my knowledge, but maybe this question might help?
Ping a site in Python?

Have you considered Twisted? What you're trying to achieve could be taken straight out of their examples. It might be overkill, but if you'll eventually want to add authentication, authorization, SSL, etc., you might as well start in that direction.

Related

Python program to find active port on a website?

My college has some open ports, with URLs like this:
http://www.college.in:913
I want a program to find the active ones, i.e., the port numbers on which the website responds.
Here is my code, but it takes a lot of time:
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

for i in range(1, 10000):
    req = Request("http://college.edu.in:" + str(i))
    try:
        response = urlopen(req)
    except URLError as e:
        print("Error at port" + str(i))
    else:
        print('Website is working fine' + str(i))
It might be faster to try opening a plain socket connection to each port in the range, and only make an HTTP request if the socket is actually open. But iterating over that many ports sequentially is always going to be slow: if each attempt takes 0.5 seconds and you're scanning 10,000 ports, that's well over an hour of waiting.
# create an INET, STREAMing socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# now connect to the web server on port 80 - the normal http port
s.connect(("www.python.org", 80))
s.close()
from https://docs.python.org/3/howto/sockets.html
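A minimal sketch of that connect-first idea (assuming Python 3, as in the question; the host name is the one from the question and the 0.2 second timeout is an arbitrary choice):

import socket

def scan_ports(host, start=1, end=10000, timeout=0.2):
    # try a plain TCP connect on each port; only report the ports that accept it
    open_ports = []
    for port in range(start, end):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)  # keep each attempt short so the scan finishes
        try:
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
        finally:
            s.close()
    return open_ports

print(scan_ports("college.edu.in"))

You could then issue an HTTP request only against the ports this returns.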
You might also consider profiling the code and finding out where the slow parts are.
You can use python-nmap, a Python wrapper around the nmap scanner.
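For example, a hedged sketch with python-nmap (it needs both the python-nmap package and the nmap binary installed; method names are per the python-nmap documentation):

import nmap

nm = nmap.PortScanner()
nm.scan('college.edu.in', '1-10000')         # scan the whole port range in one call
for host in nm.all_hosts():
    print(host, sorted(nm[host].all_tcp()))  # TCP ports nmap reported; check nm[host]['tcp'][port]['state'] for 'open'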

Using proxies with python requests

I am writing a crawler that uses multiple proxies. Basically, I have a pool of verified proxies, and in a single process I start 30 threads; each thread randomly picks one of the proxies and uses it to fetch some URL, with the timeout of each request set to 30 seconds.
However, after running for a while, I get a "too many open files" error. I guess some connections are not being closed?
If I don't use proxies with the same number of threads, there is no such error.
Could someone help?
Code:
code starting threads:
...
# the queue of urls to request
queue.add_url(url)
for i in range(numOfThreads):
    x = crawlerThread(...)
    threadList.append(x)
    x.start()
time.sleep(30)
...
code in crawling:
while currentUrl:
    # different sessions use different server ips
    sessionId = (sessionId + 1) % len(self.sessions)
    try:
        session, proxy = self.random_use_ip_session_proxy(sessionId)
        if proxy:
            # if proxy is not None, use it; otherwise use my own ip.
            # the returned proxy is a list of two elements:
            # the first is the proxy itself, the second is used for counting
            response = session.get(currentUrl, timeout=60, verify=False, proxies=proxy[0])
        else:
            response = session.get(currentUrl, timeout=60, verify=False)
    except Exception as e:
        # some error handling
        ...
    # analyze the response and produce more urls
Edit:
Now the program no longer reports the "too many open files" error (though I still see the number of socket connections grow rapidly toward 10000), but it suddenly stops with no error. Is it possible it was killed by the kernel? Where can I check this?
Just as Cory Shay mentioned in the comments, response.close() will close the connection, while the content of the response remains available.
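For example, a sketch of closing each response explicitly in the crawling loop (session, currentUrl and proxy are the names from the question; reading .text first caches the body before the socket is released):

response = session.get(currentUrl, timeout=60, verify=False,
                       proxies=proxy[0] if proxy else None)
try:
    html = response.text      # read (and cache) the body while the connection is still usable
finally:
    response.close()          # release the connection instead of leaving it dangling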

How can I implement a simple web server using Python without using any libraries?

I need to implement a very simple web-server-like app in Python which would perform basic HTTP requests and responses and display very basic output on the web page. I am not too concerned about actually coding it in Python, but I am not sure where to start? How to set this up? One file? Multiple files? I guess I have no idea how to approach the fact that this is a "server" - so I am unfamiliar with how to approach dealing with HTTP requests/sockets/processing requests, etc. Any advice? Resources?
You should look at the SimpleHTTPServer (py3: http.server) module.
Depending on what you're trying to do, you can either just use it, or check out the module's source (py2, py3) for ideas.
If you want to get more low-level, SimpleHTTPServer extends BaseHTTPServer (source) to make it just work.
If you want to get even more low-level, take a look at SocketServer (source: py2, py3).
People will often run Python like python -m SimpleHTTPServer (or python3 -m http.server) if they just want to share a directory: it's a fully functional and... simple server.
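For instance, a minimal sketch built on the standard module (Python 3 names; the handler class and the port are just illustrative):

from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # answer every GET with a tiny static page
        body = b"<html><body>Hello World</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("localhost", 9000), HelloHandler).serve_forever()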
You can use socket programming for this purpose. The following snippet creates a TCP socket and listens on port 9000 for HTTP requests:
from socket import *

def createServer():
    serversocket = socket(AF_INET, SOCK_STREAM)
    serversocket.bind(('localhost', 9000))
    serversocket.listen(5)
    while(1):
        (clientsocket, address) = serversocket.accept()
        clientsocket.send("HTTP/1.1 200 OK\n"
                          + "Content-Type: text/html\n"
                          + "\n"  # Important!
                          + "<html><body>Hello World</body></html>\n")
        clientsocket.shutdown(SHUT_WR)
        clientsocket.close()
    serversocket.close()

createServer()
Start the server, $ python server.py.
Open http://localhost:9000/ in your web-browser (which acts as client). Then in the browser window, you can see the text "Hello World" (http response).
EDIT:
The previous code was only tested on Chrome; after your suggestions about other browsers, it was modified as follows:
To make the response HTTP-like, send a plain header with HTTP version 1.1, status code 200 OK, and content type text/html.
The client socket needs to be closed once the response has been sent, since it is a TCP socket.
To close the client socket properly, shutdown() needs to be called first (see socket.shutdown vs socket.close).
The code was then tested on Chrome, Firefox (http://localhost:9000/) and plain curl in a terminal (curl http://localhost:9000).
I decided to make this work in Python 3 and make it work for Chrome, to use as an example for an online course I am developing. Python 3 of course needs encode() and decode() in the right places, and Chrome really wants to send its GET request before it gets any data back. I also added some error checking so it cleans up its socket if you abort the server or it blows up:
from socket import *

def createServer():
    serversocket = socket(AF_INET, SOCK_STREAM)
    try:
        serversocket.bind(('localhost', 9000))
        serversocket.listen(5)
        while(1):
            (clientsocket, address) = serversocket.accept()
            rd = clientsocket.recv(5000).decode()
            pieces = rd.split("\n")
            if len(pieces) > 0: print(pieces[0])
            data = "HTTP/1.1 200 OK\r\n"
            data += "Content-Type: text/html; charset=utf-8\r\n"
            data += "\r\n"
            data += "<html><body>Hello World</body></html>\r\n\r\n"
            clientsocket.sendall(data.encode())
            clientsocket.shutdown(SHUT_WR)
    except KeyboardInterrupt:
        print("\nShutting down...\n")
    except Exception as exc:
        print("Error:\n")
        print(exc)
    serversocket.close()

print('Access http://localhost:9000')
createServer()
The server also prints out the incoming HTTP request. The code of course only sends text/html regardless of the request - even if the browser is asking for the favicon:
$ python3 server.py
Access http://localhost:9000
GET / HTTP/1.1
GET /favicon.ico HTTP/1.1
^C
Shutting down...
But it is a pretty good example that mostly shows why you want to use a framework like Flask or Django instead of writing your own. Thanks for the initial code.
There is a very simple solution mentioned above, but as written it doesn't work on Python 3, because the payload sent over the socket must be bytes. This version is tested on Chrome and works. It is Python 3, although it may also work on Python 2; I never tested it.
from socket import *

def createServer():
    serversocket = socket(AF_INET, SOCK_STREAM)
    serversocket.bind(('localhost', 9000))
    serversocket.listen(5)
    while(1):
        (clientsocket, address) = serversocket.accept()
        clientsocket.send(bytes("HTTP/1.1 200 OK\n"
                                + "Content-Type: text/html\n"
                                + "\n"  # Important!
                                + "<html><body>Hello World</body></html>\n", 'utf-8'))
        clientsocket.shutdown(SHUT_WR)
        clientsocket.close()
    serversocket.close()

createServer()
This is improved from the accepted answer, but I am posting it so future users can use it easily.

How do I delete useless connections in my python script?

It's easiest to use the following sample code to explain my problem:
while True:
    NewThread = threading.Thread(target=CheckSite, args=("http://example.com", "http://demo.com"))
    NewThread.start()
    time.sleep(300)

def CheckSite(Url1, Url2):
    try:
        Response1 = urllib2.urlopen(Url1)
        Response2 = urllib2.urlopen(Url2)
        del Response1
        del Response2
    except Exception, reason:
        print "How should I delete Response1 and Response2 when exception occurs?"
        del Response1
        del Response2  #### You can't simply write this, as Response2 might not even exist if the exception was raised while opening Response1
I've written a really long script used to check the running status of different sites (response time and similar). Just as in the code above, I use a couple of threads to check different sites separately. As you can see, each thread makes several server requests, and of course you get a 403 or similar every now and then. I always assumed those wasted connections (the ones that raise exceptions) would be collected by some kind of garbage collector in Python, so I just left them alone.
But when I check my network monitor, those wasted connections are still there wasting resources, and the longer the script runs, the more of them appear. I really don't want to wrap every single server request in its own try/except just so that del response can be used in each except block to destroy the wasted connection. There has to be a better way to do this; can anybody help me out?
What exactly do you expect "delete" to mean in this context, anyway, and what are you hoping to accomplish?
Python has automatic garbage collection. These objects are defined, further, in such a way that the connection will be closed whenever the garbage collector gets around to collecting the corresponding objects.
If you want to ensure that connections are closed as soon as you no longer need the object, you can use the with construct. For example:
def CheckSite(Url1, Url2):
    with urllib2.urlopen(Url1) as Response1:
        with urllib2.urlopen(Url2) as Response2:
            # do stuff
            pass
I'd also suggest using the with statement in conjunction with the contextlib.closing function.
It closes the connection when the block finishes the job, or when it gets an exception.
Something like:
with contextlib.closing(urllib2.urlopen(url)) as response:
    pass
    # del response  # to make sure the connection holds no lingering references...
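Applied to the CheckSite function from the question, that might look like the sketch below (Python 2 syntax to match the question; nested with blocks keep it compatible with older 2.x versions):

import contextlib
import urllib2

def CheckSite(Url1, Url2):
    # contextlib.closing calls close() when each block exits,
    # whether it finishes normally or an exception is raised
    try:
        with contextlib.closing(urllib2.urlopen(Url1)) as response1:
            with contextlib.closing(urllib2.urlopen(Url2)) as response2:
                print "status codes:", response1.getcode(), response2.getcode()
    except urllib2.URLError, reason:
        print "request failed:", reason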
You should use Response1.close(). with doesn't work with urllib2.urlopen directly, but see the contextlib.closing example in the Python documentation.
Connections can stay open for hours if not properly closed, even if the process that created them exits, due to the reliable packet delivery features of TCP.
You should not catch Exception; rather, you should catch URLError, as noted in the documentation.
If an exception isn't thrown, does the connection persist? Maybe what you're looking for is
Response1 = None
Response2 = None  # initialize both so the except block can tell which one was actually opened
try:
    Response1 = urllib2.urlopen(Url1)
    Response2 = urllib2.urlopen(Url2)
    Response1.close()
    Response2.close()
except URLError, reason:
    print "How should I delete Response1 and Response2 when exception occurs?"
    if Response2 is not None:
        Response2.close()
    elif Response1 is not None:
        Response1.close()
But I don't understand why you're encapsulating both in a single try. I would do the following personally.
def CheckSites(Url1, Url2):
    try:
        Response1 = urllib2.urlopen(Url1)
    except URLError, reason:
        print "Response 1 failed"
        return
    try:
        Response2 = urllib2.urlopen(Url2)
    except URLError, reason:
        print "Response 2 failed"
        ## close Response1
        Response1.close()
        ## do something or don't based on 1 passing and 2 failing
        return
    print "Both responded"
    ## party time
Note that this accomplishes the same thing because in your code, if Url1 fails, you'll never even try to open the Url2 connection.
** Side Note **
Threading is really not helping you here at all. You might as well just try them sequentially because only one thread is going to be running at a time.
http://dabeaz.blogspot.com/2009/08/inside-inside-python-gil-presentation.html
http://wiki.python.org/moin/GlobalInterpreterLock

How to handle timeouts with httplib (python 2.6)?

I'm using httplib to access an api over https and need to build in exception handling in the event that the api is down.
Here's an example connection:
connection = httplib.HTTPSConnection('non-existent-api.com', timeout=1)
connection.request('POST', '/request.api', xml, headers={'Content-Type': 'text/xml'})
response = connection.getresponse()
This should time out, so I was expecting an exception to be raised, but response.read() just returns an empty string.
How can I know whether there was a timeout? Even better, what's the best way to gracefully handle the problem of a third-party API being down?
Even better, what's the best way to gracefully handle the problem of a 3rd-party api being down?
What do you mean by "the API is down": that the API returns HTTP 404, 500, etc., or that the API can't be reached at all?
First of all, I don't think you can know whether a web service is down before trying to access it, so for the first case you can do something like this:
import httplib

conn = httplib.HTTPConnection('www.google.com')  # HTTP rather than HTTPS here, to keep the example simple
conn.request('HEAD', '/')  # just send an HTTP HEAD request
res = conn.getresponse()
if res.status == 200:
    print "ok"
else:
    print "problem: the query returned %s because %s" % (res.status, res.reason)
And to check whether the API is unreachable at all, I think you are better off with a try/except:
import httplib
import socket

try:
    # I don't think you need the timeout unless you want to also calculate the response time ...
    conn = httplib.HTTPSConnection('www.google.com')
    conn.connect()
except (httplib.HTTPException, socket.error) as ex:
    print "Error: %s" % ex
You can combine the two approaches if you want something more general. Hope this helps.
Older versions of urllib and httplib don't expose a timeout argument. You have to import socket and set a default timeout there:
import socket
socket.setdefaulttimeout(10)  # or whatever timeout you want
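Whichever way the timeout is set, you can tell a timeout apart from other failures by catching socket.timeout explicitly. A sketch around the question's own request (host and path are just the placeholders from the question):

import httplib
import socket

def post_xml(host, path, xml):
    try:
        connection = httplib.HTTPSConnection(host, timeout=1)
        connection.request('POST', path, xml, headers={'Content-Type': 'text/xml'})
        return connection.getresponse().read()
    except socket.timeout:
        print "request to %s timed out" % host         # the server did not answer within the timeout
    except (httplib.HTTPException, socket.error) as ex:
        print "request to %s failed: %s" % (host, ex)  # refused connection, DNS failure, protocol error, ...
    return None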
This is what I found to be working correctly with httplib2. Posting it as it might still help someone:
import httplib2, socket

def check_url(url):
    h = httplib2.Http(timeout=0.1)  # 100 ms timeout
    try:
        resp = h.request(url, 'HEAD')
    except (httplib2.HttpLib2Error, socket.error) as ex:
        print "Request timed out for ", url
        return False
    return int(resp[0]['status']) < 400
