This is my first time working with SimpleHTTPServer, and honestly my first time working with web servers in general, and I'm having a frustrating problem. I'll start up my server (via SSH) and then I'll go try to access it and everything will be fine. But I'll come back a few hours later and the server won't be running anymore. And by that point the SSH session has disconnected, so I can't see if there were any error messages. (Yes, I know I should use something like screen to save the shell messages -- trying that right now, but I need to wait for it to go down again.)
I thought it might just be that my code was throwing an exception, since I had no error handling, but I added what should be a pretty catch-all try/except block, and I'm still experiencing the issue. (I feel like this is probably not the best method of error handling, but I'm new at this... so let me know if there's a better way to do it.)
class MyRequestHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
    # (this is the only function my request handler has)
    def do_GET(self):
        if 'search=' in self.path:
            try:
                pass  # (my code that does stuff)
            except Exception as e:
                # (log the error to a file)
                return
        else:
            SimpleHTTPServer.SimpleHTTPRequestHandler.do_GET(self)
Does anyone have any advice for things to check, or ways to diagnose the issue? Most likely, I guess, is that my code is just crashing somewhere else... but if there's anything in particular I should know about the way SimpleHTTPServer operates, let me know.
I've never had SimpleHTTPServer running for an extended period of time; usually I just use it to transfer a couple of files in an ad-hoc manner. But I guess it wouldn't be so bad, as long as your security constraints are handled elsewhere (i.e. a firewall) and you don't have much need for scale.
The SSH session is ending, which is killing your tasks (both foreground and background). There are two solutions to this:
Like you've already mentioned, use a utility such as screen to keep your session (and the processes started in it) alive after you disconnect.
If you really want this to run for an extended period of time, you should look into your operating system's documentation on how to start/stop/enable services (nowadays most of the cool kids are using systemd, but you might also find yourself using SysVinit or some other init system).
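For illustration, a minimal systemd unit might look something like this (the user, paths, and script name are placeholders for whatever your actual setup is):

[Unit]
Description=My SimpleHTTPServer-based app
After=network.target

[Service]
User=myuser
WorkingDirectory=/srv/myapp
ExecStart=/usr/bin/python /srv/myapp/server.py
Restart=on-failure

[Install]
WantedBy=multi-user.target

Save it as /etc/systemd/system/myapp.service and run systemctl enable --now myapp.service. As a bonus, Restart=on-failure gets you automatic restarts if the script really is crashing.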
EDIT:
This link is in the comments, but I thought I should put it here as it answers this question pretty well
I am struggling to find a solution to this.
We use Python to monitor our Postgres databases that are running on AWS RDS. This means we have extremely limited control over the server side.
If there is an issue and the server fails (hardware fault, network fault, you name it), our scripts just hang, sometimes for 8-10 minutes (non-SSL) and up to 15-20 minutes (SSL connections). By the time they recover and finally hit whatever timeout produces this seemingly random number of minutes, the server has failed over and everything works again.
Obviously, this renders our tools useless, if we can't catch these situations.
We basically run this (pseudo-code):
while True:
    try:
        query("select current_user")
    except:
        page("Database X failed")
For the basic use cases, this works just fine. E.g. if an instance is restarted, or something of the sort, no problems.
But, if there is an actual issue, the query just hangs. For minutes and minutes.
We've tried setting statement_timeout on the psycopg2 connection. But that is a server-side setting, and if the instance fails and fails over, there is no server to enforce it. So the client ends up waiting indefinitely, or until it hits one of those arbitrary timeouts.
I've looked into sockets and tested something like this:
import socket
import struct

# (pg is our own wrapper around the psycopg2 connection)
pg.connect('user', 'db-name', instanceName='foo')
fd = pg.Connection.fileno()        # fd of the underlying libpq socket
s = socket.socket(fileno=fd)       # wrap it in a Python socket object
print(s)
s.settimeout(0.0000001)            # Python-level timeout
timeval = struct.pack('ll', 0, 1)  # 0 seconds, 1 microsecond
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVTIMEO, timeval)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDTIMEO, timeval)
data = pg.query("SELECT pg_sleep(10);")
In dumping the socket with the print(s) statement, I can clearly see that we've got the right socket.
But the timeouts I set do nothing whatsoever.
I've tried many values, and they just have no effect. With the above, it should raise a timeout if more than 1 microsecond has elapsed. Ignoring common sense, I checked with tcpdump and made sure that we definitely do not get a response within 1 microsecond. Yet the thing just sits there and waits for pg_sleep(10) to complete.
Could someone shed some light on this?
It seems simple enough:
All I want is that ANY CALL made to postgres can NEVER TAKE LONGER than, say, 10 seconds. Regardless of what it is, regardless of what happens. If more than 10 seconds have elapsed, it needs to raise an exception.
From what I can see, the only way would be to use subprocess with a process-timeout. But we run threaded (we monitor hundreds of instances and spawn a persistent connection in a thread for each instance) and I've seen posts saying that this isn't reliable inside threads. It also seems silly to have each thread spawn yet another subprocess. Inception comes to mind. Where does it end?
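To make that concrete, the subprocess idea would be roughly the sketch below (the psql invocation and connection details are placeholders, not our real setup):

import subprocess

try:
    # run the probe query in a separate process, so the parent can
    # enforce a hard wall-clock limit no matter what the child blocks on
    subprocess.run(
        ['psql', '-h', 'db-host', '-U', 'monitor', '-c', 'select current_user'],
        check=True,
        timeout=10,  # raises subprocess.TimeoutExpired after 10 seconds
    )
except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
    page('Database X failed')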
But I digress. The question seems simple enough, yet my wall is showing clear signs of a large dent developing.
Greatly appreciate any insights
PS: Python 3.6 on Ubuntu LTS
Cheers
Stefan
I have a Python program that opens a socket that communicates with a program on a remote computer.
I want to check if the program on the remote computer was opened with admin permissions.
I have tried to look on-line, with no success.
This depends heavily on what the remote program is, and what other access you have to the server. The most reliable way would be to query the program so that it tells you its permissions - how to do this, and even whether it is possible at all, depends on what queries it supports and what responses it gives. If that isn't possible, you may be able to ask other processes on the server about it - for example, if you have shell access, you could try parsing the output of ps aux or locating information under /proc. But that has the potential to be quite brittle, and also comes with a raft of security issues - you would be giving shell, and possibly admin, access to every person who runs your script.
You should probably reconsider what you are trying to do with this information - there is probably a better way to solve the problem than inspecting the privileges of remote services. You've said that the process is an rpyc instance that gives you access to some Python module. Presumably, some of its functions are unavailable if it isn't running with enough permissions - in this case, the most Pythonic solution is to do exactly what you would do were it a local script: try to do whatever it is you need to do, and be prepared to handle a failure. This would usually involve a try/except block like this:
try:
    privileged_operation()
except PermissionError:
    # Handle the problem
    pass
You would need to consult the module's documentation, or play around with it a bit, to find out the exact error that it gives you here.
I need you guys :D
I have a web page; on this page I check some items and pass their values as variables to a Python script.
The problem is:
I need to write a Python script, and in that script I need to put these variables into my predefined shell commands and run them.
It is one gnuplot command and one other shell command.
I've never done anything in Python; can you guys send me some advice?
Thx
I can't fully address your question due to lack of information on the web framework that you are using, but here is some advice and guidance that you may find useful. I had a similar problem that required me to run a shell program with arguments derived from user requests (I was using the Django framework (Python)).
Now there are several factors that you have to consider:
How long will each job take
What load are you expecting (are there going to be lots of jobs)
Will there be any side effects from your shell command
Here is some explanation of why these are important:
How long will each job take.
Depending on your framework and browser, there is a limit on how long a connection to the server is kept alive. In other words, you have to make sure that the time the server takes to respond to a user request does not exceed the connection timeout set by the server or the browser. If it takes too long, you will get a server connection timeout, i.e. an error response, because nothing came back from the server side.
What is the load that you are expecting.
You have probably figured out that if the work you are requesting is huge, it will consume a lot of resources. Also, if you have multiple requests at the same time, it will take a huge toll on your server. For instance, if you do proceed with using subprocess for your jobs, it will be important to note whether your job is blocking or non-blocking.
Side effects.
It is important to understand the side effects of your shell process. For instance, if it involves writing and generating lots of temp files, you will have to consider the permissions your script runs with. It can become a complex task.
So how can this be resolved?
subprocess, which ships with base Python, will allow you to run shell commands from Python. If you want more sophisticated tools, check out the fabric library. For passing arguments, do check out optparse and sys.argv.
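For example, a rough sketch of the subprocess part, assuming gnuplot 5's -c option (the script name plot.gp and the value are made-up placeholders here):

import subprocess

# user_value would come from your web request; passing it as a list
# element (no shell=True) means it cannot inject extra shell commands
user_value = '42'
subprocess.run(
    ['gnuplot', '-c', 'plot.gp', user_value],  # plot.gp sees it as ARG1
    check=True,   # raise CalledProcessError if gnuplot exits non-zero
    timeout=60,   # don't let a stuck job hold the request forever
)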
If you expect a huge workload or a long processing time, do consider setting up a queue system for your jobs. A popular framework like Celery is a good example. You may look at gevent and asyncio (Python 3) as well. Generally, instead of returning a response on the fly, you can return a job id or a URL which the user can come back to later on and have a look at.
Point to note!
Permissions and security are vital! The last thing you want is for people to execute shell commands that are detrimental to your system.
You can also increase the connection timeout, depending on the framework that you are using.
I hope you will find this useful
Cheers,
Biobirdman
I am trying to do some logging of a view in Django (mod_wsgi). However, I want to do this in a way that doesn't hold up the client, similar to the PerlCleanupHandler phase available in mod_perl. Notice the line "It is used to execute some code immediately after the request has been served (the client went away)". This is exactly what I want.
I want the client to be serviced first and then I want to do the logging. Is there a good insertion point for the code in mod_wsgi or Django? I looked into suggestions here and here. However, in both cases, when I put a simple time.sleep(10) and do a curl/wget on the URL, the curl doesn't return for 10 seconds.
I even tried to put the time.sleep in the __del__ method of the HttpResponse object, as suggested in one of the comments, but still no dice.
I am aware that I can probably put the logging data onto a queue and do some background processing to store the logs, but I would like to avoid that approach if there is a simpler/easier one.
Any suggestions?
See the documentation at http://code.google.com/p/modwsgi/wiki/RegisteringCleanupCode for a WSGI-specific (not mod_wsgi-specific) way.
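The gist of that approach is to wrap the response iterable so that your cleanup runs in close(), which the server calls only after the full response has been written back to the client. A minimal sketch:

class CleanupWrapper:
    def __init__(self, iterable, callback):
        self._iterable = iterable
        self._callback = callback

    def __iter__(self):
        return iter(self._iterable)

    def close(self):
        # the server calls this once the response has been sent
        try:
            if hasattr(self._iterable, 'close'):
                self._iterable.close()
        finally:
            self._callback()  # do your logging here

def with_cleanup(application, callback):
    def _wrapped(environ, start_response):
        return CleanupWrapper(application(environ, start_response), callback)
    return _wrapped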
Django, as pointed out by others, may have its own ways of doing things as well, although whether they fire after all the response content is written back to the client, I don't know.
I'm using Pylons in combination with the WMI module to do some basic system monitoring of a couple of machines. For POSIX-based systems everything is simple; for Windows, not so much.
I do a request to the Pylons server to get the current CPU load; however, it's not working well, or at least not with the WMI module. First I simply did (something like) this:
import wmi

c = wmi.WMI()
for cpu in c.Win32_Processor():
    value = cpu.LoadPercentage
However, that gave me an error when accessing this module via Pylons (GET http://ip:port/cpu):
raise x_wmi_uninitialised_thread ("WMI returned a syntax error: you're probably running inside a thread without first calling pythoncom.CoInitialize[Ex]")
x_wmi_uninitialised_thread: <x_wmi: WMI returned a syntax error: you're probably running inside a thread without first calling pythoncom.CoInitialize[Ex] (no underlying exception)>
Looking at http://timgolden.me.uk/python/wmi/tutorial.html, I wrapped the code according to the example under the topic "CoInitialize & CoUninitialize", which makes the code work, but it keeps throwing "Win32 exception occurred releasing IUnknown at..."
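That is, something along these lines (a simplified sketch of the pattern from the tutorial, with my handler details left out):

import pythoncom
import wmi

def get_cpu_load():
    pythoncom.CoInitialize()  # per-thread COM initialisation
    try:
        c = wmi.WMI()
        return [cpu.LoadPercentage for cpu in c.Win32_Processor()]
    finally:
        pythoncom.CoUninitialize()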
And then, looking at http://mail.python.org/pipermail/python-win32/2007-August/006237.html and the follow-up post, I tried to follow that; however, pythoncom._GetInterfaceCount() is always 20.
I'm guessing this is somehow related to Pylons spawning worker threads and crap like that; however, I'm kinda lost here, so advice would be nice.
Thanks in advance,
Anders
EDIT: If you are doing something similar, don't bother with the WMI module; simply use http://msdn.microsoft.com/en-us/library/aa394531%28VS.85%29.aspx, and you don't have to worry about thread crap like this.
Add "sys.coinit_flags = 0" after your "import sys" line and before the "import pythoncom" line. That worked for me, although I don't know why.
To me it sounds like Windows is not enjoying the way you are doing this kind of work on what are probably temporary worker threads (as you point out).
If this is the case, and you can't get things to work, one possible workaround would be to refactor your application slightly so that there is a service thread running at all times which you can query for this information, rather than setting everything up and asking for it on demand. It might not even need to be a thread; perhaps just a utility class instance which you set up when the application starts, protected with a lock to prevent concurrent access.