Sending an image from a server to client using HTTP in Python

I was creating a web server that processes client requests and sends data over HTTP. I used Python and it works perfectly for text, PDF and HTML files. When I tried to send a jpg image through this server, the client reports that the image cannot be displayed because it contains errors. I tried different approaches given on this site, but failed. Can someone help me? The image-sending part of the code is given below. Thanks in advance...
req = clnt.recv(102400)
a = req.split('\n')[0].split()[1].split('/')[1]
if a.split('.')[1] == 'jpg':
    path = os.path.abspath(a)
    size = os.path.getsize(path)
    img_file = open(a, 'rb')
    bytes_read = 0
    while bytes_read < size:
        strng = img_file.read(1024)
        if not strng:
            break
        bytes_read += len(strng)
    clnt.sendall('HTTP/1.0 200 OK\n\n' + 'Content-type: image/jpeg"\n\n' + strng)
    clnt.close()
    time.sleep(30)

You are overwriting strng each time you perform a read on the file, so every pass through the loop discards the previously read chunk. When the loop ends, strng holds at most the final 1024-byte chunk of the file, and that is all that gets sent.
strng = img_file.read(1024)
I think that you meant to use +=:
strng += img_file.read(1024)
There is not really any advantage to reading the file in chunks like this. Reading all the file contents in one read will consume the same amount of memory. You could do this instead:
if a.split('.')[1] == 'jpg':
    path = os.path.abspath(a)
    with open(a, 'rb') as img_file:
        clnt.sendall('HTTP/1.0 200 OK\nContent-type: image/jpeg\n\n' + img_file.read())
    clnt.close()
    time.sleep(30)
Also, strictly speaking those \n characters should be \r\n for HTTP, and note that the blank line that ends the headers must come after the Content-type line, not after the status line.
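For reference, here is a minimal sketch of a correctly framed response (assuming Python 2, as in the code above; the Content-Length header is an extra that was not in the original code):
with open(a, 'rb') as img_file:
    body = img_file.read()
# Each header line ends with \r\n; a blank line separates headers from the body.
header = ('HTTP/1.0 200 OK\r\n'
          'Content-Type: image/jpeg\r\n'
          'Content-Length: %d\r\n'
          '\r\n' % len(body))
clnt.sendall(header + body)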

Related

Python HTTP server giving an error some time after starting

I coded a Python HTTP server as below and I run it from the directory in which this Python file exists. I type "python myserver.py" in the cmd and the server successfully starts and serves the index.html in the directory, but my problem is that after some time my code gives the following error and the server closes:
Traceback (most recent call last):
  File "myserver.py", line 20, in <module>
    requesting_file = string_list[1]
IndexError: list index out of range
How can I fix this problem ?
import socket

HOST, PORT = '127.0.0.1', 8082

my_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
my_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
my_socket.bind((HOST, PORT))
my_socket.listen(1)
print('Serving on port ', PORT)

while True:
    connection, address = my_socket.accept()
    request = connection.recv(1024).decode('utf-8')
    string_list = request.split(' ')  # split request on spaces
    print(request)
    method = string_list[0]
    requesting_file = string_list[1]
    print('Client request ', requesting_file)
    myfile = requesting_file.split('?')[0]  # anything after the "?" symbol is not relevant here
    myfile = myfile.lstrip('/')
    if myfile == '':
        myfile = 'index.html'  # load index file as default
    try:
        file = open(myfile, 'rb')  # open file: r => read, b => byte format
        response = file.read()
        file.close()
        header = 'HTTP/1.1 200 OK\n'
        if myfile.endswith(".jpg"):
            mimetype = 'image/jpg'
        elif myfile.endswith(".css"):
            mimetype = 'text/css'
        else:
            mimetype = 'text/html'
        header += 'Content-Type: ' + str(mimetype) + '\n\n'
    except Exception as e:
        header = 'HTTP/1.1 404 Not Found\n\n'
        response = '<html><body><center><h3>Error 404: File not found</h3><p>Python HTTP Server</p></center></body></html>'.encode('utf-8')
    final_response = header.encode('utf-8')
    final_response += response
    connection.send(final_response)
    connection.close()
socket.recv(n) is not guaranteed to read the entire n bytes of the message in one go and can return fewer bytes than requested in some circumstances.
Regarding your code, it's possible that only the method, or part thereof, is received, without any space character being present in the received data. In that case split() will return a list with one element, not the two that you assume.
The solution is to check that a full request has been received. You could do that by looping until sufficient data has arrived, e.g. you might ensure that some minimum number of bytes has been received by checking the length of the data and looping until the minimum has been reached.
Alternatively you might continue reading until a newline or some other sentinel character is received. It's probably worth capping the length of the incoming data to avoid your server being swamped by data from a rogue client.
Finally, check whether split() returns the two values that you expect and handle it accordingly if it does not. Furthermore, be very careful about the file name; what if it contains a relative path, e.g. ../../etc/passwd?
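As an illustration, here is a minimal sketch of that approach: it reads until the first \r\n (the end of the request line) with a cap on the total size, then checks the result of split() before indexing into it. The helper name and the 8192-byte cap are arbitrary choices for this example:
def recv_request_line(connection, max_bytes=8192):
    # Keep reading until the request line is complete or the cap is reached.
    data = b''
    while b'\r\n' not in data and len(data) < max_bytes:
        more = connection.recv(1024)
        if not more:  # client closed the connection early
            break
        data += more
    return data.decode('utf-8')

request = recv_request_line(connection)
string_list = request.split(' ')
if len(string_list) < 2:
    connection.close()  # malformed or incomplete request line; drop it
else:
    method = string_list[0]
    requesting_file = string_list[1]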

Send bitmap file in an HTTP response without using any libraries

I want to send a bitmap file in an HTTP response, but I have no idea how to do it; I tried different ways but they all failed. Only text can be sent. I just want to know where to begin. This is the related code:
client_connection.sendall("HTTP/1.1 200 OK\n"
                          + "Content-Type: image/bmp\n"
                          + "Content-Lenth:%d" % size
                          + "\n"
                          + arr)
What to pass in place of arr? I am writing the bitmap into a test.bmp file.
You are using the wrong end-of-line markers, and you have several typos (Content-Lenth, for one). arr should be the contents of your file:
with open('test.bmp', 'rb') as bmp:
    arr = bmp.read()

client_connection.sendall("HTTP/1.1 200 OK\r\n"
                          + "Content-Type: image/bmp\r\n"
                          + "Content-Length: %d\r\n" % len(arr)
                          + "\r\n"
                          + arr)
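One caveat not in the original answer: concatenating a str header with bytes file contents like this only works on Python 2. On Python 3, sendall() needs bytes throughout, so the header has to be encoded first, roughly like so:
header = ("HTTP/1.1 200 OK\r\n"
          "Content-Type: image/bmp\r\n"
          "Content-Length: %d\r\n"
          "\r\n" % len(arr))
client_connection.sendall(header.encode('ascii') + arr)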

How can I read exactly one response chunk with python's http.client?

Using http.client in Python 3.3+ (or any other builtin python HTTP client library), how can I read a chunked HTTP response exactly one HTTP chunk at a time?
I'm extending an existing test fixture (written in python using http.client) for a server which writes its response using HTTP's chunked transfer encoding. For the sake of simplicity, let's say that I'd like to be able to print a message whenever an HTTP chunk is received by the client.
My code follows a fairly standard pattern for reading a large response:
conn = http.client.HTTPConnection(...)
conn.request(...)
response = conn.getresponse()

resbody = []
while True:
    chunk = response.read(1024)
    if len(chunk):
        resbody.append(chunk)
    else:
        break
conn.close()
But this reads 1024-byte blocks regardless of whether the server is sending 10-byte chunks or 10 MiB chunks.
What I'm looking for would be something like the following:
while True:
    chunk = response.readchunk()
    if len(chunk):
        resbody.append(chunk)
    else:
        break
If this is not possible with http.client, is it possible with another builtin HTTP client library? If it's not possible with a builtin client lib, is it possible with a pip-installable module?
I found it easier to use the requests library, like so:
import requests

r = requests.post(url, data=foo, headers=bar, stream=True)
for chunk in r.raw.read_chunked():
    print(chunk)
Update:
The benefit of chunked transfer encoding is to allow the transmission of dynamically generated content. Whether an HTTP library lets you read individual chunks or not is a separate issue (see RFC 2616 - Section 3.6.1).
I can see how what you are trying to do would be useful, but the standard Python HTTP client libraries don't do what you want without some hackery (see http.client and httplib).
What you are trying to do may be fine for use in your test fixture, but in the wild there are no guarantees. It is possible for the chunking of the data read by your client to be different from the chunking of the data sent by your server. E.g. the data could have been "re-chunked" by a proxy server before it arrived (see RFC 2616 - Section 3.2 - Framing Techniques).
The trick is to tell the response object that it isn't chunked (resp.chunked = False) so that it returns the raw bytes. This allows you to parse the size and data of each chunk as it is returned.
import http.client

conn = http.client.HTTPConnection("localhost")
conn.request('GET', "/")
resp = conn.getresponse()
resp.chunked = False  # bypass http.client's own chunk decoding

def get_chunk_size():
    # The chunk header is the chunk size in hex, terminated by \r\n.
    size_str = resp.read(2)
    while size_str[-2:] != b"\r\n":
        size_str += resp.read(1)
    return int(size_str[:-2], 16)

def get_chunk_data(chunk_size):
    data = resp.read(chunk_size)
    resp.read(2)  # consume the \r\n that trails the chunk data
    return data

respbody = ""
while True:
    chunk_size = get_chunk_size()
    if chunk_size == 0:
        break  # a zero-size chunk marks the end of the response
    else:
        chunk_data = get_chunk_data(chunk_size)
        print("Chunk Received: " + chunk_data.decode())
        respbody += chunk_data.decode()

conn.close()
print(respbody)
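Note that resp.chunked is internal state of http.client.HTTPResponse rather than documented API, so while this trick is fine for a test fixture, it could break between Python versions.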

Python3 progress bar and download with gzip

I am having a little problem with the answer given at Python progress bar and downloads.
If the downloaded data is gzip-encoded, the Content-Length header and the total length of the data joined in the for data in response.iter_content(): loop differ: the joined data is bigger, because requests automatically decompresses gzip-encoded responses.
So the bar gets longer and longer, and once it becomes too long for a single line, it starts flooding the terminal.
A working example of the problem (the site is the first one I found on Google that has both a Content-Length header and gzip encoding):
import requests, sys

def test(link):
    print("starting")
    response = requests.get(link, stream=True)
    total_length = response.headers.get('content-length')
    if total_length is None:  # no content length header
        data = response.content
    else:
        dl = 0
        data = b""
        total_length = int(total_length)
        for byte in response.iter_content():
            dl += len(byte)
            data += byte
            done = int(50 * dl / total_length)
            sys.stdout.write("\r[%s%s]" % ('=' * done, ' ' * (50 - done)))
            sys.stdout.flush()
    print("total data size: %s, content length: %s" % (len(data), total_length))

test("http://www.pontikis.net/")
PS: I am on Linux, but it should affect other OSes too (except Windows, because \r doesn't work there IIRC).
I am using requests.Session for cookie (and gzip) handling, so a solution based on urllib or another module isn't what I am looking for.
Perhaps you should try disabling gzip compression, or otherwise accounting for it.
The way to turn it off for requests (when using a session, as you say you are):
import requests
s = requests.Session()
del s.headers['Accept-Encoding']
The header sent will now be Accept-Encoding: identity, and the server should not attempt to use gzip compression. If instead you're trying to download a gzip-encoded file, you should not run into this problem: you will receive a Content-Type of application/x-gzip-compressed. If the website itself is gzip-compressed, you'll receive a Content-Type of, for example, text/html and a Content-Encoding of gzip.
If the server always serves compressed content then you're out of luck, but no server should do that.
If you want to do something with the functional API of requests:
import requests
r = requests.get('url', headers={'Accept-Encoding': None})
Setting the header value to None via the functional API (or even in a call to session.get) removes that header from the request.
You could replace...
dl += len(byte)
...with:
dl = response.raw.tell()
From the documentation:
tell(): Obtain the number of bytes pulled over the wire so far. May differ from the amount of content returned by :meth:`HTTPResponse.read` if bytes are encoded on the wire (e.g., compressed).
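Put together, the download loop from the question might look like this (a sketch only; it assumes the same response, data and total_length variables as the question's code):
for byte in response.iter_content():
    data += byte
    # Count compressed bytes off the wire rather than decompressed content,
    # so the bar stays in step with the Content-Length header.
    dl = response.raw.tell()
    done = int(50 * dl / total_length)
    sys.stdout.write("\r[%s%s]" % ('=' * done, ' ' * (50 - done)))
    sys.stdout.flush()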
Here is a simple progress bar implemented with tqdm:
import gzip

def _reader_generator(reader):
    # Yield the file contents in 1 MiB blocks.
    b = reader(1024 * 1024)
    while b:
        yield b
        b = reader(1024 * 1024)

def raw_newline_count_gzip(fname):
    # Count the newlines in a gzip file without loading it all at once.
    f = gzip.open(fname, 'rb')
    f_gen = _reader_generator(f.read)
    return sum(buf.count(b'\n') for buf in f_gen)

num = raw_newline_count_gzip(fname)
Then loop over the gzip file:
from tqdm import tqdm

with tqdm(total=num) as pbar:
    for line in gzip.open(fname, 'rb'):
        # do whatever you want with each line
        pbar.update(1)
The bar looks like:
35%|███▌ | 26288/74418 [00:05<00:09, 5089.45it/s]

Gunzipping Contents of a URL - Python

I'm back. :) Again trying to get the gzipped contents of a URL and gunzip them, this time in Python. The #SERVER section of code is the script I'm using to generate the gzipped data. The data is known good, as it works with Java. The #CLIENT section of code is the bit of code I'm using client-side to try to read that data (for eventual JSON parsing). However, somewhere in this transfer, the gzip module forgets how to read the data it created.
#SERVER
outbuf = StringIO.StringIO()
outfile = gzip.GzipFile(fileobj = outbuf, mode = 'wb')
outfile.write(data)
outfile.close()
print "Content-Encoding: gzip\n"
print outbuf.getvalue()
#CLIENT
urlReq = urllib2.Request(url)
urlReq.add_header('Accept-Encoding', '*')
urlConn = urllib2.build_opener().open(urlReq)
urlConnObj = StringIO.StringIO(urlConn.read())
gzin = gzip.GzipFile(fileobj = urlConnObj)
return gzin.read() #IOError: Not a gzipped file.
Other Notes:
outbuf.getvalue() is the same as urlConnObj.getvalue() is the same as urlConn.read()
This StackOverflow question seemed to help me out.
Apparently, it was just wise to bypass the gzip module entirely and opt for zlib instead; passing 16 + zlib.MAX_WBITS as the wbits argument tells zlib to expect the gzip header and trailer. Also, changing "*" to "gzip" in the "Accept-Encoding" header may have helped.
#CLIENT
urlReq = urllib2.Request(url)
urlReq.add_header('Accept-Encoding', 'gzip')
urlConn = urllib2.urlopen(urlReq)
return zlib.decompress(urlConn.read(), 16+zlib.MAX_WBITS)
