Handling error response from PYCURL - python

I'm relatively new to Python, but I have been writing some basic scripts for my job to check the status of files on specific servers via FTP. I understand there are better modules for FTP, but due to security restrictions on our work computers we are limited to the basic modules installed on our system, which need to handle FTP, SFTP, and FTPS. pycurl is the only module we can currently work with.
Now pycurl works successfully at testing the connection by printing the directory and pushing or pulling a file to or from a server via FTP, SFTP, and FTPS. That's not our current issue. The issue is the error response that pycurl spits out: it doesn't display the actual error that occurred, which you would see in the verbose output. If we put in the wrong remote directory, it continues to connect after showing the error in the verbose output and then says something like "Could not access user certificates". We would like to handle the errors so they display what actually occurred. We saw options such as ERRORBUFFER, but we haven't figured out how to use them properly. Basically, if a server name is incorrect, we would like it to say that.
Does anybody have experience with pycurl, or know of any debugging approach to catch and display the actual errors? I would greatly appreciate it!

You can debug the error by making use of VERBOSE:
import pycurl

c = pycurl.Curl()
c.setopt(pycurl.URL, url)
c.setopt(pycurl.HTTPHEADER, ['Authorization: Bearer ' + token])
c.setopt(pycurl.CUSTOMREQUEST, "PUT")
c.setopt(pycurl.POSTFIELDS, data)
# VERBOSE makes libcurl print the full protocol conversation to stderr
c.setopt(pycurl.VERBOSE, 1)
c.perform()
c.close()
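If you also want the script itself to report what went wrong (rather than just reading the verbose trace), you can catch the pycurl.error exception that perform() raises; its arguments are the libcurl error code and the human-readable message (also available via c.errstr()). A minimal sketch, reusing the url variable from above:

import pycurl

c = pycurl.Curl()
c.setopt(pycurl.URL, url)
c.setopt(pycurl.VERBOSE, 1)
try:
    c.perform()
except pycurl.error as exc:
    # exc.args is (curl error code, message), e.g. (6, 'Could not resolve host: ...')
    code, message = exc.args
    print('curl error %d: %s' % (code, message))
finally:
    c.close()

For a wrong server name this prints something like "curl error 6: Could not resolve host", which is the kind of message you're after.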

Related

Socket.io POST Requests from Socket.IO-Client-Swift

I am running socket.io on an Apache server through Python Flask. We're integrating it into an iOS app (using the Socket.IO-Client-Swift library) and we're having a weird issue.
From the client side code in the app (written in Swift), I can view the actual connection log (client-side in XCode) and see the connection established from the client's IP and the requests being made. The client never receives the information back (or any information back; even when using a global event response handler) from the socket server.
I wrote a very simple test script in JavaScript on an HTML page and sent requests that way, and I received the proper responses back. With that said, it seems likely to be an issue with iOS. I've found these articles (but none of them helped fix the problem):
https://github.com/nuclearace/Socket.IO-Client-Swift/issues/95
https://github.com/socketio/socket.io-client-swift/issues/359
My next thought is to extend the logging of socket.io to find out exactly what data is being POSTed to the socket namespace. Is there a way to log exactly what data is coming into the server? (Bear in mind that the 'on' hook I've set up on the server side is not getting any data; I've tried to log it from there, but it doesn't appear to even get that far.)
I found mod_dumpio for Linux to log all POST requests but I'm not sure how well it will play with multi-threading and a socket server.
Any ideas on how to get the exact data being posted so we can at least troubleshoot the syntax and make sure the data isn't being malformed when it's sent to the server?
Thanks!
Update
When testing locally, we got it working (it was a setting in the Swift code where the namespace wasn't being declared properly). This works fine now on localhost but we are having the exact same issues when emitting to the Apache server.
We are not using mod_wsgi (as far as I know; I'm relatively new to mod_wsgi, apologies for any ignorance). We used to have a .wsgi file that called the main app script to run but we had to change that because mod_wsgi is not compatible with Flask SocketIO (as stated in the uWSGI Web Server section here). The way I am running the script now is by using supervisord to run the .py file as a daemon (using that specifically so it will autostart in the event of a server crash).
Locally, it worked great once we installed the eventlet module through pip. When I ran pip freeze on my virtual environment on the server, eventlet was installed. I uninstalled and reinstalled it just to see if that cleared anything up and that did nothing. No other Python modules that are on my local copy seem to be something that would affect this.
One other thing to keep in mind is that in the function that initializes the app, we change the port to port 80:
socketio.run(app,host='0.0.0.0',port=80)
because we have other API functions that run through a domain that is pointing to the server in this app. I'm not sure if that would affect anything but it doesn't seem to matter on the local version.
I'm at a dead end again and am trying to find anything that could help. Thanks for your assistance!
Another Update
I'm not exactly sure what was happening yet, but we went ahead and rewrote some of the code, making sure to pay extra special attention to the namespace declarations within each socket event 'on' function. It's working fine now. As I get more details, I will post them here, as I figure this will be useful for others who have the same problem. This thread also has some really valuable information on how to go about debugging/logging these types of issues, although we never actually fully figured out the answer to the original question.
I assume you have verified that Apache does get the POST requests. That should be your first test, if Apache does not log the POST requests coming from iOS, then you have a different kind of problem.
If you do get the POST requests, then you can add some custom code in the middleware used by Flask-SocketIO and print the request data forwarded by Apache's mod_wsgi. This is in the file flask_socketio/__init__.py. The relevant portion is this:
class _SocketIOMiddleware(socketio.Middleware):
    # ...
    def __call__(self, environ, start_response):
        # log what you need from environ here
        environ['flask.app'] = self.flask_app
        return super(_SocketIOMiddleware, self).__call__(environ, start_response)
You can find out what's in environ in the WSGI specification. In particular, the body of the request is available in environ['wsgi.input'], which is a file-like object you read from.
Keep in mind that once you read the payload, this file will be consumed, so the WSGI server will not be able to read from it again. Seeking the file back to the position it was before the read may work on some WSGI implementations. A safer hack I've seen people do to avoid this problem is to read the whole payload into a buffer, then replace environ['wsgi.input'] with a brand new StringIO or BytesIO object.
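A rough sketch of that buffering trick, applied to the _SocketIOMiddleware shown above (this is an illustration of the idea, not the library's actual code):

import io

class _SocketIOMiddleware(socketio.Middleware):
    def __call__(self, environ, start_response):
        # read the whole body (CONTENT_LENGTH bytes) so it can be logged...
        length = int(environ.get('CONTENT_LENGTH') or 0)
        body = environ['wsgi.input'].read(length)
        print('POST body: %r' % body)
        # ...then hand the WSGI server a fresh stream so it can still consume it
        environ['wsgi.input'] = io.BytesIO(body)
        environ['flask.app'] = self.flask_app
        return super(_SocketIOMiddleware, self).__call__(environ, start_response)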
Are you using flask-socketio on the server side? If you are, there is a lot of debugging available in the constructor.
socketio = SocketIO(app, async_mode=async_mode, logger=True, engineio_logger=True)

Reading data that is being sent to localhost by another application

An application I'm using has the option to enable an API that sends some data when certain events occur to a URL. I configured it to send the data to http://localhost:666/simple/ and used a short program (written by someone else in C#) that takes the data and dumps them to a text file. The author said that you need to run the .exe as administrator to be able to listen to http events and it did indeed work.
I'm trying to achieve the same using python. I took this short example from the requests library and adapted it to the following:
import requests

url = 'http://localhost:666/simple/'
r = requests.get(url, stream=True)
for line in r.iter_lines():
    print(line)
I launched command prompt with administrator privileges, but when I try to run this script I get the following error: ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it.
Since the other program is working correctly, I am assuming I am doing something wrong in my code and I'm looking for some help to fix it or an alternative to try out.
requests is used for, well, making requests to a server, not for being a server.
You may want to look at the docs.
Use a socket to listen for data on the specific port.
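For example, since the application in the question POSTs to http://localhost:666/simple/, a minimal sketch using only the standard library could listen on that port and dump whatever arrives (port and path are taken from the question; adjust as needed):

from http.server import BaseHTTPRequestHandler, HTTPServer

class DumpHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # read exactly Content-Length bytes of the request body
        length = int(self.headers.get('Content-Length', 0))
        body = self.rfile.read(length)
        print(self.path, body)
        self.send_response(200)
        self.end_headers()

if __name__ == '__main__':
    # listen on the same port the other application is configured to POST to
    HTTPServer(('localhost', 666), DumpHandler).serve_forever()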

Heroku Django Logging

I'm trying to debug an internal server error in my Django app, running on Heroku. I'm completely new to all of this web server stuff, so I really have no idea what to do.
It seems like the stdout output is sometimes getting logged in heroku logs and sometimes not. I was reasonably sure that the program was reaching a certain line but the prints at that point are simply not showing up.
I am seeing the 500 error in my Heroku logs, but there is no stack trace or anything else in there. I am trying to create a web server to respond to GET and POST requests from various applications I have running, which means I don't know how to debug this in a web browser, if that's even applicable. The current error is on a POST request sent to the web server. I can't replicate this locally because the HTTP module I am using (http://www.python-requests.org/en/latest/) seems to be unable to connect to a local IP address.
I have done some extensive googling for the last hour and I haven't found any help. Do I need to enable logging or something somewhere in Heroku? I am completely new to this, so please be explicit in your explanations. I have heard mention of a way to get stack traces emailed to you, but I haven't seen an explanation of how to do that. Is that possible?
Thanks!
I would recommend two things in this case:
First: use Python's logging facility rather than print statements (http://docs.python.org/2/howto/logging-cookbook.html). This gives you much more control over where your statements end up and allows you to filter them.
Second: use a logging add-on. This vastly increases the amount of logging you can store (Loggly keeps all your logs for 24 hours even on the free tier), so you don't have to worry about the relevant information falling out before you get around to looking at it.
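For the first point, note that on Heroku anything written to stdout/stderr ends up in heroku logs, so one way to wire this up is to point Django's logging at the console. A minimal sketch for settings.py (the 'myapp' logger name is just a placeholder for your own app):

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            # StreamHandler writes to stderr, which Heroku captures in its logs
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        # Django's own loggers, including the 500-error tracebacks from django.request
        'django': {
            'handlers': ['console'],
            'level': 'INFO',
        },
        # your app's logger; use logging.getLogger(__name__) in your code
        'myapp': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
    },
}

Then replace the print statements with logger.info(...) / logger.exception(...) calls so the messages carry levels you can filter on.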

PyAPNS SSL3_WRITE_PENDING error

I'm using PyAPNS module and Bottle framework in demo of my app to send push notifications to all registered devices.
At the beginning everything works fine; I've followed the manual for PyAPNS. But after the service has been running in the background on the server for some time, I start to receive this error:
SSLError: [Errno 1] _ssl.c:1217: error:1409F07F:SSL routines:SSL3_WRITE_PENDING:bad write retry
After restarting the service everything works fine again. What should I do about that? Or how should I run such a service in the background? (For now I'm just running it in another screen.)
I had the same issue as you did when using this library. (I'm assuming you are in fact using https://github.com/simonwhitaker/PyAPNs, which is what I'm using; there is at least one other lib out there with a similar name, but I don't think you'd be using that.)
AFAIK, when you're using the simple notification service, the APNS server might hang up on you for reasons including using an incorrect token, having a malformed request, etc. Your connection might also get broken if your network connection drops out. The PyAPNS code doesn't handle such a hangup very gracefully right now: it attempts to re-use the socket even after it has been closed. My experience with the SSL3_WRITE_PENDING error was that I would always see an error such as "error: [Errno 110] Connection timed out" on the socket first, and then get SSL3_WRITE_PENDING when PyAPNS tried to re-use the socket.
If you are seeing the server hang up on you and you want to know why, it helps to use the enhanced version of APNS, so that the server will write back info about what you did wrong.
As it happens, there is currently a pull request (https://github.com/simonwhitaker/PyAPNs/pull/23/files) that both moves PyAPNS to use enhanced APNS AND handles disconnections more gracefully. You'll see I commented on that pull request and have created my own fork of PyAPNS that handles disconnections in the way that suited my use case the best.
So you can use the code from pull request to perhaps find out why the APNS server is hanging up on you. And / or you could use it to simplify your failure recovery so you just retry the send if an exception is thrown rather than have to re-create the APNS object.
Hopefully the pull request will be merged to master soon (possibly including my changes as well).
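Until something like that is merged, a blunt workaround on the sending side is to treat a failed write as "this connection is dead", rebuild the APNs object, and retry once. A rough sketch, assuming the simonwhitaker/PyAPNs API (APNs, Payload, gateway_server.send_notification) and hypothetical certificate paths and token:

import ssl
import socket
from apns import APNs, Payload

CERT_FILE = 'cert.pem'  # hypothetical paths to your push certificate/key
KEY_FILE = 'key.pem'

def make_apns():
    return APNs(use_sandbox=True, cert_file=CERT_FILE, key_file=KEY_FILE)

apns = make_apns()

def send_push(token_hex, message):
    global apns
    payload = Payload(alert=message, sound='default', badge=1)
    try:
        apns.gateway_server.send_notification(token_hex, payload)
    except (ssl.SSLError, socket.error):
        # the old socket is unusable after a hangup; reconnect and retry once
        apns = make_apns()
        apns.gateway_server.send_notification(token_hex, payload)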

CherryPy (or other Python framework) with FastCGI on shared host

I am trying to configure the Python mini-framework CherryPy with FastCGI (actually fcgid) on Apache. I am on a shared host, so I don't have access to httpd.conf, just .htaccess. I have followed these tutorials to no avail:
http://tools.cherrypy.org/wiki/FastCGIWSGI
http://tools.cherrypy.org/wiki/BluehostDeployment
I keep getting 500 errors with the Apache logs saying "Premature end of script headers". I have tried everything (permissions/shebangs/full paths/daemonized/not-daemonized). I know Apache is correctly executing my .fcgi, because I am able to print to the error log from Python, but that's it. Has anyone out there successfully installed CherryPy or any other framework on a shared host before? Your help would be greatly appreciated. Thanks.
Apache + Bluehost + fastcgi + cherrypy + wsgi is unfortunately a lot of pieces. I wish I had a year to write the Definitive Guide for you, but alas. You might gain some insight from the rather long mailing list thread which resulted in those links you posted.
An idea: make sure your .fcgi file has a reference to the correct python executable in the initial line:
#!/usr/bin/python
I had to get Django running with FastCGI on Bluehost, and Apache using the wrong Python environment was my problem (it worked from the shell, but not from the web/Apache).
Other than that, if you can print to the error log from your code, can you confirm that your code is correctly executed, without any exceptions, when you access the web page (not when running from the shell)?
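One quick way to confirm that (just a sketch, since anything the .fcgi process writes to stderr normally ends up in Apache's error log): wrap your startup code and log the traceback of anything that fails before the response headers are sent. The run_app() call below is a hypothetical stand-in for however you start your app:

import sys
import traceback

try:
    run_app()  # hypothetical: whatever starts your CherryPy/WSGI app in the .fcgi
except Exception:
    # the traceback lands in Apache's error log rather than being silently lost
    traceback.print_exc(file=sys.stderr)
    sys.stderr.flush()
    raise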
The Bluehost article has been the best resource, but I didn't carefully read the part about getting the latest patches (the beginning of step 3). At the time of the article, and even now with CherryPy version 3.1.2, you can't do 'dynamic mode' FastCGI (where Apache spawns the process); more here. Dynamic mode is basically essential if you are on a shared host.
I have checked out the trunk (3.2.0rc1), and after jumping through some hoops, got it to work. I followed step 5, method C in the bluehost article. Here was the stuff in the main of my cherryd.fcgi:
if __name__ == '__main__':
    cherrypy.config.update({
        'server.socket_port': None,
        'server.socket_host': None,
        'server.socket_file': None
    })
    start(daemonize=False, fastcgi=True, imports=["hello"])
Also, in cherrypy/process/servers.py, I had to change the following line:
# from this
# if not hasattr(socket.socket, 'fromfd'):
# to this
if not hasattr(socket, 'fromfd'):
So, it is possible to get it to work, but it feels kind of hacky. You should wait for the final release of version 3.2.0, or do what I did and check out Web.py. I was able to get it working with my shared host very easily (docs explain fastcgi/htaccess well).
In your web server's log file, it should actually show what the output was that confused it. Are you sure you're looking in the error log as well as the access log?
