Scripting library for monitoring server health?

Is there a scripting library, preferably in Python, Perl, or Ruby, that lets you get information on disk usage, load, the list of running processes, and CPU usage in a standard way?
I always end up parsing the output of df, uptime, ps, etc. Given that these differ across Unix flavors and have to be done in a completely different way on Windows, I would have thought that someone would already have done this.

The simplest option is monit: http://mmonit.com/monit/
A step up, as #lawrencealan mentioned, is Nagios: http://nagios.org/
And here's a new interesting effort: http://amon.cx/

(Ruby) Daniel Berger maintains a lot of gems in this field. Look for sys-cpu, sys-uptime, sys-uname, sys-proctable, sys-host, sys-admin, and sys-filesystem - all multi-platform AFAIK.

Have you looked into Nagios? http://nagios.org/
There are an abundance of agents: http://exchange.nagios.org/directory/Addons/Monitoring-Agents
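If you want to do this directly from Python rather than deploying a full monitoring system, the psutil library (which also comes up in an answer further down this page) exposes CPU, memory, disk, load, and process information through one cross-platform API. A minimal, untested sketch of the kinds of calls involved:
import psutil

print(psutil.cpu_percent(interval=1))                     # overall CPU usage in percent
print(psutil.getloadavg())                                # 1/5/15-minute load averages (psutil >= 5.6)
print(psutil.virtual_memory())                            # RAM, roughly what free reports
print(psutil.disk_usage("/"))                             # per-mount usage, roughly what df reports
for proc in psutil.process_iter(attrs=["pid", "name"]):   # process list, roughly ps
    print(proc.info)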

Python 3: Search the virtual memory of a running Windows process

begin TLDR;
I want to write a Python 3 script to scan through the memory of a running Windows process and find strings.
end TLDR;
This is for a CTF binary. It's a typical Windows x86 PE file. The goal is simply to get a flag from the process's memory as it runs. This is easy with ProcessHacker: you can search through the strings in the memory of the running application and find the flag with a regex. Now, because I'm a masochistic geek, I strive to script out solutions for CTFs (for everything, really). Specifically, I want to use Python 3; C# is also an option, but I would really like to keep all of the solution scripts in Python.
I thought this would be a very simple task. You know... pip install some library written by someone who has already solved the problem and use it. I couldn't find anything that would let me do what I need for this task. Here are the libraries I've tried already.
ctypes - This was the first one I used, specifically ReadProcessMemory. I kept getting error 299, which was because the buffer I was passing in was larger than that section of memory, so I made a recursive function that would catch that error, halve the buffer length until it read something, and then read one byte at a time until it hit another 299. I may have been on the right track there, but I wasn't able to get the flag. I WAS able to find the flag, but only if I already knew its exact address (which I'd get from ProcessHacker). I may make a separate question on SO to address that; this one is really just me asking the community whether something already exists before diving in.
pymem - A nice wrapper around ctypes, but it had the same issues as above.
winappdbg - Python 2.x only. I don't want to use Python 2.x.
haystack - Looks like this depends on winappdbg, which depends on Python 2.x.
angr - This is a possibility; I've only scratched the surface with it so far. It looks complicated and it's on the to-learn list, but I don't want to dive into something right now that isn't going to solve the issue.
volatility - Looks like this is meant for working with full RAM dumps, not for hooking into currently running processes and reading their memory.
My plan at the moment is to dive a bit more into angr to see if that will work, then go back to pymem/ctypes and try more things (a rough sketch of that direction is below). If all else fails, ProcessHacker IS open source, but I'm not fluent in C, so it'll take time to figure out how they're doing it. I'm really hoping there's some Python 3 library I'm missing, or maybe I'm going about this the wrong way.
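For what it's worth, the direction I had in mind with ctypes was roughly the following. This is an untested sketch, not a working solution: instead of guessing buffer sizes, walk the target's memory map with VirtualQueryEx and hand ReadProcessMemory exactly one region at a time. The constants and structure layout are taken from the Win32 documentation; the flag regex is just a placeholder.
import ctypes
import ctypes.wintypes as wintypes
import re

PROCESS_QUERY_INFORMATION = 0x0400
PROCESS_VM_READ = 0x0010
MEM_COMMIT = 0x1000
PAGE_NOACCESS = 0x01
PAGE_GUARD = 0x100

class MEMORY_BASIC_INFORMATION(ctypes.Structure):
    _fields_ = [("BaseAddress", ctypes.c_void_p),
                ("AllocationBase", ctypes.c_void_p),
                ("AllocationProtect", wintypes.DWORD),
                ("RegionSize", ctypes.c_size_t),
                ("State", wintypes.DWORD),
                ("Protect", wintypes.DWORD),
                ("Type", wintypes.DWORD)]

k32 = ctypes.WinDLL("kernel32", use_last_error=True)
k32.OpenProcess.restype = wintypes.HANDLE
k32.VirtualQueryEx.argtypes = [wintypes.HANDLE, ctypes.c_void_p,
                               ctypes.POINTER(MEMORY_BASIC_INFORMATION), ctypes.c_size_t]
k32.VirtualQueryEx.restype = ctypes.c_size_t
k32.ReadProcessMemory.argtypes = [wintypes.HANDLE, ctypes.c_void_p, ctypes.c_void_p,
                                  ctypes.c_size_t, ctypes.POINTER(ctypes.c_size_t)]

def scan_process(pid, pattern):
    # Yield (address, match) pairs for a bytes regex found in the target's memory.
    handle = k32.OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, False, pid)
    mbi = MEMORY_BASIC_INFORMATION()
    address = 0
    while k32.VirtualQueryEx(handle, address, ctypes.byref(mbi), ctypes.sizeof(mbi)):
        base = mbi.BaseAddress or 0
        readable = (mbi.State == MEM_COMMIT and mbi.Protect != PAGE_NOACCESS
                    and not mbi.Protect & PAGE_GUARD)
        if readable:
            buf = ctypes.create_string_buffer(mbi.RegionSize)
            got = ctypes.c_size_t(0)
            # Reading exactly RegionSize bytes sidesteps the error-299 buffer-size guessing.
            if k32.ReadProcessMemory(handle, base, buf, mbi.RegionSize, ctypes.byref(got)):
                for m in re.finditer(pattern, buf.raw[:got.value]):
                    yield base + m.start(), m.group(0)
        address = base + mbi.RegionSize
    k32.CloseHandle(handle)

# e.g. for addr, hit in scan_process(1234, rb"flag\{[^}]*\}"): print(hex(addr), hit)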
I ended up writing the script using the frida library. I also have to give a shout-out to rootbsd, because his or her code in the fridump3 project helped greatly.
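For anyone curious, the frida route looks roughly like this (an untested sketch modeled on the fridump3 idea; the process name and flag pattern are placeholders): inject a small agent that enumerates readable memory ranges, pull each range back to Python, and regex-search it there.
import re
import frida

TARGET = "target.exe"          # placeholder process name
FLAG_RE = rb"flag\{[^}]+\}"    # placeholder flag format

AGENT = """
rpc.exports = {
    ranges: function () {
        return Process.enumerateRangesSync({protection: 'r--', coalesce: true})
                      .map(function (r) { return [r.base.toString(), r.size]; });
    },
    read: function (base, size) {
        return Memory.readByteArray(ptr(base), size);
    }
};
"""

session = frida.attach(TARGET)
script = session.create_script(AGENT)
script.load()

for base, size in script.exports.ranges():
    try:
        data = bytes(script.exports.read(base, size))
    except Exception:
        continue                          # ranges can vanish or be unreadable
    for m in re.finditer(FLAG_RE, data):
        print(hex(int(base, 16) + m.start()), m.group(0))

session.detach()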

Python: how to check the use of an external program

In Python, how do you check that an external program is running? I'd like to track my use of some programs so I can see the amount of time I've spent with them. For example, if I launch my program, I want to be able to see whether Chrome has already been launched and, if so, start a timer that ends when I exit Chrome.
I've seen that the subprocess module can launch external programs, but this is not what I'm looking for.
Thanks in advance.
You are looking for psutil
It is great to get information on the system (CPU / RAM / HD / ...)
And in your case, processes : https://pythonhosted.org/psutil/#processes
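A minimal, untested sketch of that approach; the process name ("chrome.exe") and the polling interval are just examples:
import time
import psutil

TARGET = "chrome.exe"          # assumed process name

def is_running(name):
    return any(p.info["name"] == name
               for p in psutil.process_iter(attrs=["name"]))

# Wait for the program to start, then time how long it stays up.
while not is_running(TARGET):
    time.sleep(1)
started = time.time()
while is_running(TARGET):
    time.sleep(1)
print("%s ran for %d seconds" % (TARGET, time.time() - started))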
Obtaining information on running processes in general depends on the operating system you are using. The Python standard library does not contain a platform-independent way of obtaining this information. There are, however, third-party libraries for this purpose, e.g. psutil.
In my case I would try something using Task Manager data, probably via subprocess.check_output("ps") (that looks good enough to me), but you can also use the psutil library.
Tell us what you did later :)

Python GPIB commands

I have a working GPIB interface and Linux-GPIB package installed and working.
I only know two commands at the moment, x.write and x.find. I don't know much about Python, but I recognize the dot operator and realize that after importing gpib, I should get some functions at my disposal.
I have not been able to locate the list of GPIB functions.
They are in the gpib library. You reference them like so: gpib.foo().
Add this line into your code:
help(gpib)
And browse through the functions/classes.
If you are working in Python, I think pyvisa is what you are looking for. It provides lots of useful high-level functions that help you send a series of SCPI commands to your equipment via GPIB, such as write, read, ask, and so on.
As for the SCPI commands themselves, they usually differ between vendors, so to find out what kind of SCPI you should send to the equipment, you should read the corresponding datasheet. Alternatively, you may have installed the drivers provided by the vendor; in that case you can send even higher-level commands. For instance, if you would like to control a voltage source, they have probably already provided a function like setvoltage(double voltage). Things will be much easier for you.
Actually there are many commands available. Besides the two you mentioned, there are x.read, x.ask, x.ask_for_value, and so on.
But I recommend you read the help files; I think that will give you a better understanding.
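To make the pyvisa route concrete, here is a small, untested sketch; the GPIB address and the *IDN? / *RST commands are generic examples, and the real commands come from the instrument's manual. Note that in current pyvisa the ask method has been renamed query.
import pyvisa

rm = pyvisa.ResourceManager()                # uses the installed VISA backend
inst = rm.open_resource("GPIB0::12::INSTR")  # assumed board and primary address
print(inst.query("*IDN?"))                   # query() = write() followed by read()
inst.write("*RST")                           # reset the instrument
inst.close()
rm.close()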

Is there a Perl alternative to YSlow?

I'd like to have a tool in Perl to gather useful statistics for page loads (e.g., download time/speed, CDN information, headers, DNS lookups, compression).
Does anyone know if one exists or if there's a place to learn about how to make one?
You might want to try WWW::Mechanize::Timed, which extends the WWW::Mechanize module. The ::Timed features will allow you to collect information on how long your requests take. The underlying ::Mechanize module, which is itself a subclass of LWP::UserAgent, would give you access to your response, including headers, body content, and images. From these you could compute total page "weight", number of requests, etc. This doesn't cover everything YSlow does (exposing the DNS internals underlying gethostbyname would be a good trick!) but I hope it's a place to start, if I've understood your question properly.
You could have the perl CGI (or any perl program) run a few times under the profiler, and scan for commonalities. I haven't seen a web-based interface like this, but if you have control over the perl side of things, the documentation is here:
http://www.perl.com/pub/a/2004/06/25/profiling.html
It basically boils down to running your perl program with -d:DProf and then, after it finishes, running dprofpp in the same directory:
# perl -d:DProf ./foo.pl
# dprofpp
Update:
Yes, this is not the same thing as protocol profiling, as duly noted below, but there aren't many alternatives for Perl. If you are trying to find where the Perl portion of the slowness is coming from, profiling Perl is a good place to start. Products like YSlow will track the pure protocol aspects of it, whether the CGI is Perl, PHP, or Python.
Personally, I use it to profile my django site, which is in python and flash, and I profile those separately from the protocol portion of the system, which I also use YSlow for.
Also, there are perl plugins for "ddd" which will at least make it graphical:
http://www.gnu.org/software/ddd/
Sorry if this doesn't solve the exact problem. I would like to know if there's a Perl interface to collate this as well, but I know this is where I would start looking...

Does anybody use DjVu files in their production tools?

When it comes to archiving and document portability, it's all about PDF. I heard about DjVu some years ago, and it now seems mature enough for serious use. The benefits seem to be a small file size and a fast open/read experience.
But I have absolutely no feedback on how good or bad it is in the real world:
Is it technically hard to implement in traditional information-management tools?
Is it worth learning / implementing a solution to generate / parse it when you know PDF?
Is the end-user feedback good when it comes to day-to-day use?
How do you manage exchanges with the external world (the one with a PDF-only state of mind)?
As a programmer, what are the pros and cons?
And what would you use to convince your boss to (or not to) use DjVu?
And overall, what gains have you noticed after including DjVu in your workflow?
Bonus question: do you know some good Python libs for hacking together some quick and dirty scripts as a start?
EDIT: Doing some research, I found that Wikimedia uses it internally to store its book collection, but I can't find any feedback about it. Is anybody involved in that project around here?
I've found DjVu to be ideal for image-intensive documents. I used to sell books of highly detailed maps, and those were always in DjVu. PDF, however, works really well; it's a standard, and -everybody- will be able to open it without installing additional software.
There's more info at:
http://print-driver.com/news/pdf-vs-djvu-i1909.html
Personally, I'd say unless it's a graphics-rich document, just stick to PDF.
