To start off, this desktop app is really an excuse for me to learn Python and how a GUI works.
I'm trying to help my clients visualize how much bandwidth they are going through, when it's happening, and where their visitors are. All of this would be displayed with graphs or whatever would be most convenient. (Down the road, I'd like to add CPU/memory usage.)
I was thinking the easiest way would be for the app to connect via SFTP, download the specified log, and then use regular expressions to filter out the necessary information.
I was thinking of using:
Python 2.6
PySide
Paramiko
to start out with. I was looking at Twisted for the SFTP part, but I thought keeping it simple for now would be a better choice.
Does this seem right? Should I be trying to use SFTP? Or should I have some subdomain of my site (i.e. app.mysite.com) push the logs to the client?
How about regular expressions to parse the logs?
SFTP or shelling out to rsync seems like a reasonable way to retrieve the logs; a couple of sketches follow the list below. As for parsing them, regular expressions are what most people tend to use. However, there are other approaches, too. For instance:
Parse Apache logs to SQLite database
Using pyparsing to parse logs. This one is parsing a different kind of log file, but the approach is still interesting.
Parsing Apache access logs with Python. The author actually wrote a little parser, which is available in an apachelogs module.
You get the idea.
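For the retrieval side, a minimal Paramiko sketch might look like this; the host, credentials, and log path are placeholders for your own setup:

    # Fetch a remote Apache log over SFTP with Paramiko.
    # Host, credentials, and paths are placeholders.
    import paramiko

    transport = paramiko.Transport(('example.com', 22))
    transport.connect(username='user', password='secret')
    sftp = paramiko.SFTPClient.from_transport(transport)
    try:
        sftp.get('/var/log/apache2/access.log', 'access.log')
    finally:
        sftp.close()
        transport.close()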
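And for the parsing side, a regex sketch against Apache's common/combined log format, assuming a local access.log; it captures just the fields the question cares about (visitor IP, timestamp, bytes transferred):

    import re

    # Matches the start of an Apache common/combined log line, e.g.:
    # 127.0.0.1 - - [10/Oct/2011:13:55:36 -0700] "GET / HTTP/1.1" 200 2326
    LINE = re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
        r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\S+)')

    totals = {}
    for line in open('access.log'):
        m = LINE.match(line)
        if m and m.group('bytes') != '-':   # '-' means no bytes logged
            ip = m.group('ip')
            totals[ip] = totals.get(ip, 0) + int(m.group('bytes'))

    # totals now maps visitor IP -> total bytes served, ready for graphing.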
I would like to know the fastest way to turn a simple Python script into a basic web app.
For example, say I would like to create a web app that takes a keyword from the user and displays the most retweeted tweet on Twitter. If I write a Python script capable of performing that task using Twitter's API, how would I go about turning it into a web app for people to access?
I have looked at frameworks such as Django, but it would take me weeks or months to learn how to use it. I just need something quick and simple. Any such alternatives?
Make a CGI script out of it. You basically get the request information from the web server via environment variables, and you print the desired HTML to stdout. There are helper libraries such as Werkzeug that help abstract away the handling of the environment variables by wrapping them in a Request object.
This technique is quite outdated and isn't normally used nowadays, as the script has to be started anew on every request and thus incurs the interpreter's startup cost each time.
Nevertheless, this may actually be a good solution for you, because it is quick and every web server supports it.
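As a rough sketch of what that looks like with Werkzeug wrapping the CGI environment (the script name and the Twitter lookup are placeholders; wsgiref's CGIHandler from the standard library does the stdout plumbing):

    #!/usr/bin/env python
    # cgi-bin/retweets.py -- a WSGI app run under plain CGI
    from wsgiref.handlers import CGIHandler
    from werkzeug.wrappers import Request, Response

    @Request.application
    def application(request):
        # Werkzeug turns the CGI environment variables into a Request object.
        keyword = request.args.get('keyword', '')
        # ... call the Twitter API here and find the most retweeted tweet ...
        body = '<html><body>Most retweeted tweet for: %s</body></html>' % keyword
        return Response(body, mimetype='text/html')

    if __name__ == '__main__':
        CGIHandler().run(application)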
I want to use server-side includes so I can include header and footer files on my personal portfolio. I've been using the Python SimpleHTTPServer because I had the command handy, so I know how to run it.
My server-side includes don't currently work. My understanding, based on this article, is that I would need to configure my SimpleHTTPServer to allow for server-side includes. I haven't been able to find this information anywhere, so I'm thinking that I need to use a different web server. Can someone clarify?
If I have to use a different web server, I'd love to hear any suggestions. I'm a noob, so something simple would be great. Also, it'd be helpful if you could provide: (1) any good tutorials for making the necessary config changes so I can run SSI; (2) the command I run to start the server (so I can make an alias). I looked briefly at Apache stuff, but it seems very intimidating. I'm wondering if there is a more noob-friendly way. I'm trying to build a personal portfolio, not do anything crazy.
Thanks for your help!
I found ssi-server by Googling for "ssi python". It says it provides "Server Side Includes in Python's SimpleHTTPServer" and looks like it might work for you.
Do you have to use Server Side Includes? Since you're using Python, there are a lot of options for getting this kind of functionality. For example, you could just use Python's string manipulation to join the files together (e.g. with the str.format method). Or you could use a templating language like Jinja2, Mako, etc.
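For instance, a tiny build step with str.format could stitch shared header/footer files into each page; the file and page names here are made up for the example:

    # build.py -- regenerate the static pages whenever the includes change.
    # File names are hypothetical; point them at your own layout.
    header = open('includes/header.html').read()
    footer = open('includes/footer.html').read()

    for name in ('index', 'portfolio', 'contact'):
        body = open('pages/%s.html' % name).read()
        page = '{0}\n{1}\n{2}'.format(header, body, footer)
        open('%s.html' % name, 'w').write(page)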
What SSI features are you depending on? Or do you have some existing SSI files that you want to use as-is?
I'm writing some scripts for our sales people to query an Elasticsearch index through Python. (Eventually the script will update lead info in our Salesforce DB.)
I have been using the urllib2 module, with simplejson, to pull results. The problem is that this seems to be a not-so-good approach, as evidenced by scripts that are taking longer and longer to run.
Questions:
Does anyone have any opinions (opinions, on the internet???) about Elasticsearch clients for Python? Specifically, I've found pyes and pyelasticsearch via elasticsearch.org. How do these two stack up?
How good or bad is my current approach of dynamically building the query and running it via self.raw_results = simplejson.load(urllib2.urlopen(self.query))?
Any advice is greatly appreciated!
We use pyes, and it's pretty neat. With it you can use the Thrift protocol, which is faster than the REST service.
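A minimal sketch, going from memory of pyes's documented usage; treat the exact class and argument names as assumptions to check against the pyes docs, and the index/field names are invented:

    from pyes import ES
    from pyes.query import TermQuery

    conn = ES('127.0.0.1:9500')   # Thrift port; use 9200 for plain REST
    results = conn.search(query=TermQuery('company', 'acme'))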
It sounds like you have an issue unrelated to the client. If you can pare down what's being sent to ES and represent it in a simple curl command it will make what's actually running slowly more apparent. I suspect we just need to tweak your query to make sure it's optimal for your context.
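For reference, the raw REST equivalent is small enough to reason about; this sketch (index and field names invented) POSTs the same JSON body you would hand to curl when timing the query in isolation:

    import urllib2
    import simplejson

    # Pared-down test query against a hypothetical "leads" index.
    body = simplejson.dumps({'query': {'term': {'company': 'acme'}}, 'size': 10})
    response = urllib2.urlopen('http://localhost:9200/leads/_search', body)
    results = simplejson.load(response)
    print results['hits']['total']  # how many documents matched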
I'm looking to optimize our translation workflow for a Django/Python-based project.
Currently we run a command to export our gettext files, send them to the translators, and receive them back. Well, you know the drill.
What, in your opinion, is the best way to optimize this workflow? Are there tools that integrate nicely and allow translations to be pushed to and pulled from the system?
Options I've seen so far:
http://trac.transifex.org/ (supported in Django 1.3)
Transifex was designed for pretty much this. It doesn't pull the strings from the project/app automatically yet, but it can be extended to do so if desired.
Transifex has two ways to automate this. First, if your POT file is on a public server, you can set up a resource to auto-fetch the POT file frequently and update your resource.
The second option is to use the client app and run it every time you build/deploy/commit.
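As a sketch of that second option, a deploy hook could just shell out to the Transifex client and Django's gettext commands; the exact flags here are from memory of the tools and worth double-checking:

    # sync_translations.py -- run on every build/deploy.
    import subprocess

    # Regenerate the gettext files from the source tree.
    subprocess.check_call(['django-admin.py', 'makemessages', '--all'])
    # Push the updated source strings to Transifex.
    subprocess.check_call(['tx', 'push', '-s'])
    # Pull back whatever the translators have finished.
    subprocess.check_call(['tx', 'pull', '-a'])
    # Compile the .po files so Django can serve them.
    subprocess.check_call(['django-admin.py', 'compilemessages'])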
I know that with SimpleHTTPServer I can make my directories accessible to web browsers over the Internet. So I run just one line of code and, as a result, another person working on another computer can use his/her browser to see the contents of my directories.
But I wonder if I can do more complicated things. For example, somebody uses his/her browser to load my Python program with a set of parameters (example.py?x=2&y=2) and, as a result, sees the HTML page generated by the Python program (not the program's source code).
I also wonder if I can process an HTML form submitted to the SimpleHTTPServer.
While it is possible, you have to do pretty much everything yourself (parsing request parameters, handling routing, etc.).
If you are not looking to get experience in creating web frameworks, but just want to build a small site, you should probably use a minimalistic framework instead.
Try Bottle, a simple single-file web framework: http://bottlepy.org
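Here's a sketch of your example.py?x=2&y=2 example in Bottle, using a made-up /add route:

    # app.py -- run with: python app.py
    from bottle import route, request, run

    @route('/add')
    def add():
        # Query-string parameters arrive parsed in request.query.
        x = int(request.query.get('x', '0'))
        y = int(request.query.get('y', '0'))
        return '<html><body>x + y = %d</body></html>' % (x + y)

    run(host='localhost', port=8080)  # then open http://localhost:8080/add?x=2&y=2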
Maybe the VerseMatch project and related recipes over at ActiveState are something you would be interested in examining? It implements a small application using the standard library to run code dynamically.
Have you considered using CGIHTTPServer instead of SimpleHTTPServer? Then you can toss your scripts in cgi-bin and they'll execute. You have to include the Content-Type header and whatnot, but if you're looking for quick and dirty, it's real convenient.
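A quick sketch: start the server with python -m CGIHTTPServer from your site's root, then drop something like this (made-up name) into cgi-bin/:

    #!/usr/bin/env python
    # cgi-bin/add.py -- answers /cgi-bin/add.py?x=2&y=2 and form POSTs alike
    import cgi

    form = cgi.FieldStorage()          # parses the query string or form body
    x = int(form.getfirst('x', 0))
    y = int(form.getfirst('y', 0))

    print 'Content-Type: text/html'    # the header mentioned above
    print                              # blank line ends the headers
    print '<html><body>x + y = %d</body></html>' % (x + y)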