I have written a Python script which POSTs data to an Apache webserver, and the data arrives nicely in the $_POST and $_FILES arrays. Now I want to implement the same thing in Lua, but I can't get it going yet.
My code in Python looks something like this:
try:
    wakeup()
    socket.setdefaulttimeout(TIMEOUT)
    opener = urllib2.build_opener(MultipartPostHandler.MultipartPostHandler)
    host = HOST
    func = "post_img"
    url = "http://{0}{1}?f={2}&nodemac={3}&time={4}".format(host, URI, func, nodemac, timestamp)
    if os.path.isfile(filename):
        data = {"data": open(filename, "rb")}
        print "POST time " + str(time.time())
        response = opener.open(url, data, timeout=TIMEOUT)
        retval = response.read()
        if "SUCCESS" in retval:
            return 0
        else:
            print "RETVAL: " + retval
            return 99
except Exception as e:
    print "EXCEPTION time " + str(time.time()) + " - " + str(e)
    return 99
The Lua code I have come up with thus far:
#! /usr/bin/lua
http = require("socket.http")
ltn12 = require("ltn12")
http.request{
    url = "localhost/test.php?test=SEMIOS",
    method = "POST",
    headers = {
        ["Content-Type"] = "multipart/form-data; boundary=127.0.1.1.1000.17560.1375897994.242.1",
        ["Content-Length"] = 7333
    },
    source = ltn12.source.file(io.open("test.gif")),
    sink = ltn12.sink.table(response_body)
}
print(response_body[1]) --response to request
but this code keeps giving me this on execution:
$ ./post.lua
/usr/bin/lua: ./post.lua:17: attempt to index global 'response_body' (a nil value)
stack traceback:
./post.lua:17: in main chunk
[C]: ?
reg#DesktopOffice:~$
There are several examples of sending POST data using Lua: from the author of luasocket and on SO. This example works directly with files, which is very close to what you are doing.
Your description of this question doesn't match the comment you provided.
I have 2 Python scripts: one is the server itself and the other is for POSTing an image.
Server.py ->
app = Flask(__name__)

@app.route("/", methods=["GET", "POST"])
def index():
    if request.method == "POST":
        file = request.files.get('file')
        if file is None or file.filename == "":
            return jsonify({"error": "nothing inside file"})
        try:
            image_bytes = file.read()
            pillow_img = imageio.imread(image_bytes, pilmode='RGB')
            start_time = time.time()
            result = tfnet.return_predict(pillow_img)[0]
            end_time = time.time()
            result['response_time'] = end_time - start_time
            return jsonify({'Output': str(result)})
        except Exception as e:
            return jsonify({"error": str(e)})
    return "OK"
I'm trying to re-create test.py in Java. I'm able to POST to the server and get a 200 response, but I'm having trouble sending an image to the server.py that's running.
I'm breaking this problem into 2 parts: 1) try to POST some JSON so that my server.py recognizes it as JSON instead of None; 2) try to encode an image and POST it to server.py so it is recognized and a response is sent back.
I'm not able to do either of the steps.
Test.py ->
import requests

resp = requests.post("http://xxxxxxxxxxxxxx:8080/",
                     files={'file': open('D://Jupyter//GCP Deploy//darkflow-master//1.jpg', 'rb')})
print(resp.json())
and the respective Java file, which doesn't send the image but a JSON.
import java.io.IOException;

import org.apache.http.HttpEntity;
import org.apache.http.client.ClientProtocolException;
import org.apache.http.client.ResponseHandler;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class HttpURLConnectionExample {
    public static void main(String[] args) throws IOException {
        try (CloseableHttpClient httpclient = HttpClients.createDefault()) {
            HttpPost httpPost = new HttpPost("http://xxxxxxxxxxxxxx:8080/");
            String json = "{\r\n" +
                    " \"file\": \"string value\"\r\n" +
                    "}";
            StringEntity stringEntity = new StringEntity(json);
            httpPost.setEntity(stringEntity);
            System.out.println("Executing request " + httpPost.getRequestLine());

            // Create a custom response handler
            ResponseHandler<String> responseHandler = response -> {
                int status = response.getStatusLine().getStatusCode();
                if (status >= 200 && status < 300) {
                    HttpEntity entity = response.getEntity();
                    return entity != null ? EntityUtils.toString(entity) : null;
                } else {
                    throw new ClientProtocolException("Unexpected response status: " + status);
                }
            };

            String responseBody = httpclient.execute(httpPost, responseHandler);
            System.out.println("----------------------------------------");
            System.out.println(responseBody);
        }
    }
}
The only success I'm having with this is that the server recognizes the POST request, but it doesn't go into the try block; the file in Server.py is None.
Try converting the image to base64 on the client, put it in JSON, and send it to the server; then on the server you can convert the base64 back to an image.
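A rough sketch of that idea (both sides shown in Python purely for illustration, since the asker's client is actually Java; the placeholder host and file name are taken from the question):

import base64
import requests  # client side shown in Python only to illustrate the idea

# Client: read the image, base64-encode it, and POST it inside a JSON body
with open("1.jpg", "rb") as fh:
    payload = {"file": base64.b64encode(fh.read()).decode("ascii")}
resp = requests.post("http://xxxxxxxxxxxxxx:8080/", json=payload)
print(resp.json())

# Server (inside the Flask view): decode the base64 string back to raw bytes
# data = request.get_json()
# image_bytes = base64.b64decode(data["file"])
# pillow_img = imageio.imread(image_bytes, pilmode='RGB')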
file = request.files.get('file')
This line in Server.py was changed to request.get_data() to get the raw data being transferred. However, the problem now is with the image bytes: the Java POST request's image bytes have a varying length when run multiple times, while the Python POST request's image bytes have a fixed length when run multiple times.
e.g. Python POST request of image => 474525 bytes (fixed on every run)
Java POST request of image => 474741, 474744, 474740 bytes (varying on every run)
I've been trying to debug a Python script I've inherited. It's trying to POST a CSV to a website via HTTPLib. The problem, as far as I can tell, is that HTTPLib doesn't handle receiving a 100-continue response, as per "python http client stuck on 100 continue". Similarly to that post, this "Just Works" via Curl, but for various reasons we need this to run from a Python script.
I've tried to employ the work-around as detailed in an answer on that post, but I can't find a way to use that to submit the CSV after accepting the 100-continue response.
The general flow needs to be like this:
-> establish connection
-> send data including "expect: 100-continue" header, but not including the JSON body yet
<- receive "100-continue"
-> using the same connection, send the JSON body of the request
<- receive the 200 OK message, in a JSON response with other information
Here's the code in its current state, with the 10+ commented-out remnants of other attempted workarounds removed:
#!/usr/bin/env python
import os
import ssl
import http.client
import binascii
import logging
import json

# classes taken from https://stackoverflow.com/questions/38084993/python-http-client-stuck-on-100-continue
class ContinueHTTPResponse(http.client.HTTPResponse):
    def _read_status(self, *args, **kwargs):
        version, status, reason = super()._read_status(*args, **kwargs)
        if status == 100:
            status = 199
        return version, status, reason

    def begin(self, *args, **kwargs):
        super().begin(*args, **kwargs)
        if self.status == 199:
            self.status = 100

    def _check_close(self, *args, **kwargs):
        return super()._check_close(*args, **kwargs) and self.status != 100


class ContinueHTTPSConnection(http.client.HTTPSConnection):
    response_class = ContinueHTTPResponse

    def getresponse(self, *args, **kwargs):
        logging.debug('running getresponse')
        response = super().getresponse(*args, **kwargs)
        if response.status == 100:
            setattr(self, '_HTTPConnection__state', http.client._CS_REQ_SENT)
            setattr(self, '_HTTPConnection__response', None)
        return response


def uploadTradeIngest(ingestFile, certFile, certPass, host, port, url):
    boundary = binascii.hexlify(os.urandom(16)).decode("ascii")
    headers = {
        "accept": "application/json",
        "Content-Type": "multipart/form-data; boundary=%s" % boundary,
        "Expect": "100-continue",
    }
    context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
    context.load_cert_chain(certfile=certFile, password=certPass)
    connection = ContinueHTTPSConnection(
        host, port=port, context=context)
    with open(ingestFile, "r") as fh:
        ingest = fh.read()
    ## Create form-data boundary
    ingest = "--%s\r\nContent-Disposition: form-data; " % boundary + \
             "name=\"file\"; filename=\"%s\"" % os.path.basename(ingestFile) + \
             "\r\n\r\n%s\r\n--%s--\r\n" % (ingest, boundary)
    print("pre-request")
    connection.request(
        method="POST", url=url, headers=headers)
    print("post-request")
    #resp = connection.getresponse()
    resp = connection.getresponse()
    if resp.status == http.client.CONTINUE:
        resp.read()
        print("pre-send ingest")
        ingest = json.dumps(ingest)
        ingest = ingest.encode()
        print(ingest)
        connection.send(ingest)
        print("post-send ingest")
        resp = connection.getresponse()
    print("response1")
    print(resp)
    print("response2")
    print(resp.read())
    print("response3")
    return resp.read()
But this simply returns a 400 "Bad Request" response. The problem (I think) lies with the formatting and type of the "ingest" variable. If I don't run it through json.dumps() and encode() then the HTTPConnection.send() method rejects it:
ERROR: Got error: memoryview: a bytes-like object is required, not 'str'
I had a look at using the Requests library instead, but I couldn't get it to use my local certificate bundle to accept the site's certificate. I have a full chain with an encrypted key, which I did decrypt, but still ran into constant SSL_VERIFY errors from Requests. If you have a suggestion to solve my current problem with Requests, I'm happy to go down that path too.
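For context, my Requests attempt looked roughly like the sketch below (all paths and the endpoint are placeholders, and Requests expects the private key to be unencrypted), but it kept failing certificate verification:

import requests

# Placeholder endpoint and file paths only
resp = requests.post(
    "https://target.host:8443/ingest",
    files={"file": open("ingest.csv", "rb")},
    cert=("client-cert.pem", "client-key.pem"),  # client certificate and private key
    verify="local-ca-bundle.pem",                # local CA bundle for the server's certificate
)
print(resp.status_code, resp.text)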
How can I use HTTPLib or Requests (or any other libraries) to achieve what I need to achieve?
In case anyone comes across this problem in the future, I ended up working around it with a bit of a kludge. I tried HTTPLib, Requests, and URLLib3, which are all known to not handle the 100-continue header, so... I just wrote a Python wrapper around Curl via the subprocess.run() function, like this:
import subprocess

# curlPath, args, and targetHost are defined elsewhere in the script
def sendReq(upFile):
    # '@' tells curl's -F option to read and upload the file's contents
    sendFile = f"file=@{upFile}"
    completed = subprocess.run([
        curlPath,
        '--cert',
        args.cert,
        '--key',
        args.key,
        targetHost,
        '-H',
        'accept: application/json',
        '-H',
        'Content-Type: multipart/form-data',
        '-H',
        'Expect: 100-continue',
        '-F',
        sendFile,
        '-s'
    ], stdout=subprocess.PIPE, universal_newlines=True)
    return completed.stdout
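A hypothetical call then just passes the CSV path and parses whatever curl printed to stdout (the path is a placeholder, and I'm assuming the service replies with JSON, per the accept header):

import json

raw = sendReq("/path/to/ingest.csv")
print(json.loads(raw))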
The only issue I had with this was that it fails if Curl was built against the NSS libraries, which I resolved by including a statically-built Curl binary with the package, the path to which is contained in the curlPath variable in the code. I obtained this binary from this Github repo.
This is a test script to request data from the Rovi API, provided by the API itself.
test.py
import requests
import time
import hashlib
import urllib


class AllMusicGuide(object):

    api_url = 'http://api.rovicorp.com/data/v1.1/descriptor/musicmoods'
    key = 'my key'
    secret = 'secret'

    def _sig(self):
        timestamp = int(time.time())
        m = hashlib.md5()
        m.update(self.key)
        m.update(self.secret)
        m.update(str(timestamp))
        return m.hexdigest()

    def get(self, resource, params=None):
        """Take a dict of params, and return what we get from the api"""
        if not params:
            params = {}
        params = urllib.urlencode(params)
        sig = self._sig()
        url = "%s/%s?apikey=%s&sig=%s&%s" % (self.api_url, resource, self.key, sig, params)
        resp = requests.get(url)
        if resp.status_code != 200:
            # THROW APPROPRIATE ERROR
            print ('unknown err')
        return resp.content
from another script I import the module:
from roviclient.test import AllMusicGuide
and create an instance of the class inside a mood function:
def mood():
    test = AllMusicGuide()
    print (test.get('[moodids=moodids]'))
According to the documentation, the following is the syntax for requests:
descriptor/musicmoods?apikey=apikey&sig=sig [&moodids=moodids] [&format=format] [&country=country] [&language=language]
but running the script I get the following error:
unknown err
<h1>Gateway Timeout</h1>:
what is wrong?
"504, try once more. 502, it went through."
Your code is fine, this is a network issue. "Gateway Timeout" is a 504. The intermediate host handling your request was unable to complete it. It made its own request to another server on your behalf in order to handle yours, but this request took too long and timed out. Usually this is because of network congestion in the backend; if you try a few more times, does it sometimes work?
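Purely as an illustration (not code from the question), a naive retry helper with Requests might look like this:

import time
import requests

def get_with_retries(url, attempts=5):
    # Retry on 504 Gateway Timeout with a small exponential back-off
    resp = None
    for attempt in range(attempts):
        resp = requests.get(url)
        if resp.status_code != 504:
            break
        time.sleep(2 ** attempt)
    return resp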
In any case, I would talk to your network administrator. There could be any number of reasons for this and they should be able to help fix it for you.
In python, how would I check if a url ending in .jpg exists?
ex:
http://www.fakedomain.com/fakeImage.jpg
thanks
The code below is equivalent to tikiboy's answer, but using the high-level and easy-to-use requests library.
import requests

def exists(path):
    r = requests.head(path)
    return r.status_code == requests.codes.ok

print exists('http://www.fakedomain.com/fakeImage.jpg')
requests.codes.ok equals 200, so you can substitute the exact status code if you wish.
requests.head may throw an exception if the server doesn't respond, so you might want to add a try-except construct.
Also, if you want to include codes 301 and 302, consider code 303 too, especially if you dereference URIs that denote resources in Linked Data. A URI may represent a person, but you can't download a person, so the server will redirect you to a page that describes this person using a 303 redirect.
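If you want those redirect cases to count as "exists", one simple sketch is to let Requests follow them and test the final status:

import requests

def exists(path):
    # Follow 301/302/303 redirects and check the status of the final target
    r = requests.head(path, allow_redirects=True)
    return r.status_code == requests.codes.ok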
>>> import httplib
>>>
>>> def exists(site, path):
...     conn = httplib.HTTPConnection(site)
...     conn.request('HEAD', path)
...     response = conn.getresponse()
...     conn.close()
...     return response.status == 200
...
>>> exists('http://www.fakedomain.com', '/fakeImage.jpg')
False
If the status is anything other than a 200, the resource doesn't exist at the URL. This doesn't mean that it's gone altogether. If the server returns a 301 or 302, this means that the resource still exists, but at a different URL. To alter the function to handle this case, the status check line just needs to be changed to return response.status in (200, 301, 302).
thanks for all the responses everyone, ended up using the following:
try:
    f = urllib2.urlopen(urllib2.Request(url))
    deadLinkFound = False
except:
    deadLinkFound = True
Looks like http://www.fakedomain.com/fakeImage.jpg was automatically redirected to http://www.fakedomain.com/index.html without any error.
Redirects for 301 and 302 responses are automatically followed without giving any response back to the user.
Please take a look at HTTPRedirectHandler; you might need to subclass it to handle that.
Here is a sample from Dive Into Python:
http://diveintopython3.ep.io/http-web-services.html#redirects
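A rough, untested sketch of such a subclass, in the same urllib2 / Python 2 style as the code above:

import urllib2

class NoRedirectHandler(urllib2.HTTPRedirectHandler):
    # Returning None makes urllib2 raise an HTTPError for the 3xx instead of following it
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

opener = urllib2.build_opener(NoRedirectHandler())
try:
    opener.open('http://www.fakedomain.com/fakeImage.jpg')
    print 'served directly'
except urllib2.HTTPError, e:
    print 'redirect or error, status', e.code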
There are problems with the previous answers when the file is on an FTP server (ftp://url.com/file); the following code works when the file is on FTP, HTTP, or HTTPS:
import urllib2

def file_exists(url):
    request = urllib2.Request(url)
    request.get_method = lambda : 'HEAD'
    try:
        response = urllib2.urlopen(request)
        return True
    except:
        return False
Try it with mechanize:
import mechanize

br = mechanize.Browser()
br.set_handle_redirect(False)
try:
    br.open_novisit('http://www.fakedomain.com/fakeImage.jpg')
    print 'OK'
except:
    print 'KO'
This might be good enough to see if a url to a file exists.
import urllib

if urllib.urlopen('http://www.fakedomain.com/fakeImage.jpg').code == 200:
    print 'File exists'
in Python 3.6.5:
import http.client

def exists(site, path):
    connection = http.client.HTTPConnection(site)
    connection.request('HEAD', path)
    response = connection.getresponse()
    connection.close()
    return response.status == 200

exists("www.fakedomain.com", "/fakeImage.jpg")
In Python 3, the module httplib has been renamed to http.client.
And you need to remove the http:// and https:// from your URL, because httplib considers everything after the : to be a port number, and the port number must be numeric.
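For example, you could split a full URL into host and path first; a small sketch reusing the exists() above:

from urllib.parse import urlparse

url = "http://www.fakedomain.com/fakeImage.jpg"
parts = urlparse(url)
print(exists(parts.netloc, parts.path))  # netloc is just the hostname, with no scheme or stray ':'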
Python3
import requests

def url_exists(url):
    """Check if the resource exists."""
    if not url:
        raise ValueError("url is required")
    try:
        resp = requests.head(url)
        return True if resp.status_code == 200 else False
    except Exception as e:
        return False
The answer from @z3moon was good, but I think it is for Python 2.x. For Python 3.x, you may want to add .request to the urllib module call.
import urllib.request

def check_valid_URLs(url) -> bool:
    try:
        if urllib.request.urlopen(url).code == 200:
            return True
        else:
            return False
    except:
        return False
I think you can try sending an HTTP request to the URL and reading the response. If no exception is raised, it probably exists.
I have a WS (ZOPE/PLONE) that accepts some XML-RPC calls.
So I wrote a Python snippet of code to make a call to the WS and do something.
I followed the message format that I found here, and this is my snippet of code:
import httplib

def queryInventory():
    try:
        xmlrpc_envelope = '<?xml version="1.0"?>'\
            '<methodCall>'\
            '<methodName>easyram</methodName>'\
            '<params>'\
            '<param>'\
            '<value>%s</value>'\
            '</param>'\
            '</params>'\
            '</methodCall>'
        params = '<EasyRAM>'\
            '<authentication><user>EasyRAM</user><pwd>EasyRAM</pwd><hotel>52</hotel></authentication>'\
            '<operation type="QueryInventory" rate="master"><date from="2012-03-10" to="2012-03-10" /><date from="2012-03-22" to="2012-03-22" /></operation>'\
            '</EasyRAM>'
        data = xmlrpc_envelope % params
        print data
        headers = {"Content-type": "text/xml"}
        conn = httplib.HTTPSConnection('myHost')
        aa = '/ws/xmlrpc/public/EasyRAM'
        conn.request("POST", aa, data, headers)
        response = conn.getresponse()
        print "EasyRAM.queryInventory() response: status=%s, reason=%s" % (response.status, response.reason)
        print "EasyRAM.queryInventory() response=%s" % response.read()
        conn.close()
    except Exception, ss:
        print "EasyRAM.queryInventory() -> Error=%s" % ss
        raise
    return ''

queryInventory()
The problem is that I receive the following error message:
Invalid request The parameter, params , was omitted from the request. Make sure to specify all required parameters, and try the request again.
It's as if the parameter isn't passed.
If I modify my snippet by wrapping my parameter (called params) into <string></string> like this:
xmlrpc_envelope = '<?xml version="1.0"?>'\
    '<methodCall>'\
    '<methodName>easyram</methodName>'\
    '<params>'\
    '<param>'\
    '<value><string>%s</string></value>'\
    '</param>'\
    '</params>'\
    '</methodCall>'
something happens, but it isn't what I want; in fact, my parameter turns out to be empty (or void, if you like).
Any ideas or suggestions?
P.S.: I know there is an XML-RPC library for Python called xmlrpclib, but I have to do it this way, because this is an example for clients that can't directly use a library like that.
I just resolved it.
If I add a function like this:
import string

def escape(s, replace=string.replace):
    s = replace(s, "&", "&amp;")
    s = replace(s, "<", "&lt;")
    return replace(s, ">", "&gt;",)
and before calling the connection method I do something like:
params = escape(params)
Then all goes well.
Hope this could be useful for future reference.
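As an aside, the standard library ships an equivalent helper, so the same escaping could also be done with xml.sax.saxutils (a sketch, not what my code above uses):

from xml.sax.saxutils import escape

params = escape(params)  # turns &, <, > into &amp;, &lt;, &gt; before substituting into <value>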