Server alarm bell - python

I would like to implement a simple piece of functionality on my server:
I want to know whether a given process running on the server is active or not.
If the process is not running, I would like to send an e-mail to the administrator.
To implement this, I'm thinking of a cron job scheduled every x hours.
How can I implement this cron job? I'm using Python server-side.
Is this the best solution?

I'm assuming that your server is *nix, as you mention cron jobs.
You can find bash scripts that do exactly this if you google for them. Here's a bash script which checks for specific services and sends an email if a service is down:
http://bash.cyberciti.biz/monitoring/simple-process-checker-script/
You just need to tweak the _chklist, _sendemail and _email variables to suit your needs.
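If you'd rather stay in Python, here is a minimal sketch of an equivalent checker you could schedule from cron. It assumes a Linux host (it scans /proc) and an SMTP server on localhost; the process name and both addresses are placeholders:

```python
# Minimal Python process checker, meant to be run from cron.
# Assumptions: Linux (reads /proc) and an SMTP server on localhost;
# PROCESS_NAME and the addresses are placeholders.
import os
import smtplib
from email.mime.text import MIMEText

PROCESS_NAME = "myserver"          # hypothetical process to watch
ADMIN_EMAIL = "admin@example.com"  # placeholder address

def is_running(name):
    """Return True if any process command line contains `name`."""
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open("/proc/%s/cmdline" % pid, "rb") as f:
                cmdline = f.read().replace(b"\0", b" ").decode(errors="replace")
        except OSError:
            continue  # process exited while we were scanning
        if name in cmdline:
            return True
    return False

def send_alert(name, to_addr):
    """E-mail the administrator that `name` is down."""
    msg = MIMEText("Process %r is not running." % name)
    msg["Subject"] = "ALERT: %s down" % name
    msg["From"] = "monitor@localhost"
    msg["To"] = to_addr
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

def main():
    if not is_running(PROCESS_NAME):
        send_alert(PROCESS_NAME, ADMIN_EMAIL)

# When deployed, end the file with:
#     if __name__ == "__main__":
#         main()
```

A crontab entry such as 0 */2 * * * /usr/bin/python /path/to/checker.py would then run the check every two hours.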

Related

Is there a way to send ISPF commands and get job statistics using python script on Mainframe?

I'm trying to automate some manual tasks on the mainframe using a Python script, and to do that I need the job status. I know there is an FTP library to log in to the mainframe, but I'm not able to send commands and get job statistics. Please suggest any relevant documentation.
Thanks in advance!
Not sure exactly what you're after concerning "job statistics", but there are a set of APIs provided by z/OSMF that can be invoked from any REST requester. A jobs interface is included. Docs on these APIs are found here: https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.izua700/IZUHPINFO_RESTServices.htm
z/OSMF must be installed on your z/OS system before using this; it's not always there. Your systems programmer should know whether it's up, running and usable and whether you would have authority to use those services.
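If you do go the z/OSMF route, the jobs interface can be called from Python with just the standard library. A hedged sketch: host, credentials and filters below are placeholders, and the /zosmf/restjobs/jobs path and CSRF header follow the z/OSMF documentation linked above (a test LPAR with a self-signed certificate would additionally need an ssl context):

```python
# Sketch of querying the z/OSMF REST jobs interface (stdlib only).
# Host, user, password, owner and prefix are placeholders.
import base64
import json
import urllib.request

def jobs_url(host, owner="*", prefix="*"):
    """Build the z/OSMF job-list URL for a given owner/prefix filter."""
    return ("https://%s/zosmf/restjobs/jobs?owner=%s&prefix=%s"
            % (host, owner, prefix))

def list_jobs(host, user, password, owner="*", prefix="*"):
    """Return the job list as parsed JSON (jobname, jobid, status, ...)."""
    req = urllib.request.Request(jobs_url(host, owner, prefix))
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    req.add_header("X-CSRF-ZOSMF-HEADER", "")  # required by z/OSMF
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```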

Executing a command on a remote server with decoupling, redundancy, and asynchronous

I have a few servers that require executing commands on other servers. For example a Bitbucket Server post receive hook executing a git pull on another server. Another example is the CI server pulling a new docker image and restarting an instance on another server.
I would normally use ssh for this, creating a user/group specifically for the job with limited permissions.
A few downsides with ssh:
A synchronous ssh call means a git push has to wait until the remote command completes.
If a host is not contactable for whatever reason, the ssh command will fail.
Maintaining keys, users, and sudoers permissions can become unwieldy.
A few possibilities:
Find an open-source, out-of-the-box solution (I have tried, with no luck so far)
Set up a REST API on each server that accepts calls with some type of authentication, e.g. POST https://server/git/pull/?apikey=a1b2c3
Set up Python/Celery to execute tasks on a different queue for each host. This means a celery worker on each server that can execute commands and possibly a service that accepts REST API calls, converting them to Celery tasks.
Is there a nice solution to this problem?
Defining the problem
You want to be able to trigger a remote task without waiting for it to complete.
This can be achieved in any number of ways, including with SSH. You can execute a remote command without waiting for it to complete by closing or redirecting all I/O streams, e.g. like this:
ssh user@host "/usr/bin/foobar </dev/null >/dev/null 2>&1"
You want to be able to defer the task if the host is currently unavailable.
This requires a queuing/retry system of some kind. You will also need to decide whether the target hosts will be querying for messages ("pull") or whether messages will be sent to the target hosts from elsewhere ("push").
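As a minimal illustration of the queuing/retry idea, here is a sketch of a file-backed queue drained by a cron job; the spool directory and task format are placeholders, not part of any particular tool:

```python
# File-backed task queue with retry: enqueue() always succeeds locally,
# drain() (run from cron) retries everything still on disk.
# QUEUE_DIR and the task dict format are placeholders.
import json
import os
import uuid

QUEUE_DIR = "/var/spool/mytasks"  # hypothetical spool directory

def enqueue(task, queue_dir=QUEUE_DIR):
    """Write a task (any JSON-serialisable dict) as one file in the queue."""
    os.makedirs(queue_dir, exist_ok=True)
    path = os.path.join(queue_dir, uuid.uuid4().hex + ".json")
    with open(path, "w") as f:
        json.dump(task, f)
    return path

def drain(run_task, queue_dir=QUEUE_DIR):
    """Attempt every queued task; delete on success, keep for retry.

    run_task is a callback returning True on success. Failed tasks stay
    on disk and are retried on the next cron invocation.
    """
    done, kept = 0, 0
    for name in sorted(os.listdir(queue_dir)):
        path = os.path.join(queue_dir, name)
        with open(path) as f:
            task = json.load(f)
        if run_task(task):
            os.remove(path)
            done += 1
        else:
            kept += 1
    return done, kept
```

This gives you deferral for unavailable hosts at the cost of latency bounded by the cron interval.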
You want to simplify access control as much as possible.
There's no way to completely avoid this issue. One solution would be to put most of the authentication logic in a centralized task server. This splits the problem into two parts: configuring access rights in the task server, and configuring authentication between the task server and the target hosts.
Example solutions
Hosts attempt to start tasks over SSH using method above for asynchrony. If host is unavailable, task is written to local file. Cron job periodically retries sending failed tasks. Access control via SSH keys.
Hosts add tasks by writing commands to files on an SFTP server. Cron job on target hosts periodically checks for new commands and executes them if found. Access control managed via SSH keys on the SFTP server.
Hosts post tasks to REST API which adds them to queue. Celery daemon on each target host consumes from queue and executes tasks. Access managed primarily by credentials sent to the task queuing server.
Hosts post tasks to API which adds tasks to queue. Task consumer nodes pull tasks off the queue and send requests to API on target hosts. Authentication managed by cryptographic signature of sender appended to request, verified by task server on target host.
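The cryptographic signature in the last option can be sketched with the standard library's hmac module; the shared-secret scheme below is an assumption for illustration, not a complete protocol (key distribution and replay protection are out of scope):

```python
# HMAC request signing: the sender signs the request body with a shared
# secret; the target host recomputes and compares the signature.
import hashlib
import hmac

def sign(secret, body):
    """Return a hex HMAC-SHA256 signature for a request body (bytes)."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify(secret, body, signature):
    """Constant-time check that `signature` matches `body`."""
    return hmac.compare_digest(sign(secret, body), signature)
```

The sender would attach the signature to the request (e.g. in a header), and the target host rejects any request whose body fails verify().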
You can also look into tools that do some or all of the required functions out of the box. For example, some Google searching came up with Rundeck which seems to have some job scheduling capabilities and a REST API. You should also consider whether you can leverage any existing automated deployment or management tools already present in your system.
Conclusions
Ultimately, there's no single right answer to this question. It really depends on your particular needs. Ask yourself: How much time and effort do you want to spend creating this system? What about maintenance? How reliable does it need to be? How much does it need to scale? And so on, ad infinitum...

Running web.py as a service on linux

I've used web.py to create a web service that returns results in json.
I run it on my local box as python scriptname.py 8888
However, I now want to run it on a linux box.
How can I run it as a service on the linux box?
update
After the answers it seems like the question isn't right. I am aware of the deployment process, frameworks, and the webserver. Maybe the following back story will help:
I had a small Python script that takes a file as input and, based on some logic, splits that file up. I wanted to use this script with a web front end I already have in place (Grails). I wanted to call it from the Grails application, but did not want to do so by executing a command line, so I wrapped the Python script as a web service, which takes in two parameters and returns, in JSON, the number of split files. This web service will ONLY be used by my Grails front end and nothing else.
So, I simply wish to run this little web.py service so that it can respond to my grails front end.
Please correct me if I'm wrong, but would I still need nginx and the like after the above? This script sounds trivial, but eventually I will be adding more logic to it, so I wanted it as a web service which can be consumed by a web front end.
In general, there are two parts to this.
The "remote and event-based" part: a service used remotely over a network needs a certain set of skills: it must be able to accept (multiple) connections, read requests, process them, reply, speak at least basic TCP/HTTP, and handle dead connections; and if it serves more than a small private LAN, it needs to be robust (think DoS) and maybe also perform some kind of authentication.
If your script is willing to take care of all of this, then it's ready to open its own port and listen. I'm not sure if web.py provides all of these facilities.
Then there's the other part, "daemonization", when you want to run the server unattended: running at boot, running under the right user, not blocking your parent (ssh, init script or whatever), not having ttys open but maybe logging somewhere...
Servers like nginx and Apache are built for this, and provide interfaces like mod_python or WSGI, so that much simpler applications can give up as much of the above as possible.
So the answer would be: yes, you still need Nginx or the like, unless:
you can implement all of this yourself in Python,
or you are using the script on localhost only and are willing to take some risk of instability.
In that case, you can probably do it on your own.
Try this:
nohup python scriptname.py 8888 >/dev/null 2>&1 &
The nohup and trailing & detach the process from your terminal, so it keeps running in the background after you log out. (Note that merely redirecting stderr with 2>/dev/null does not make it run as a daemon.)
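If the Linux box runs systemd, a unit file is a more robust way to keep the service running unattended (start at boot, restart on failure). A sketch, with placeholder paths, user and port:

```ini
# Hypothetical unit file, e.g. /etc/systemd/system/myapp.service;
# the description, user, path and port are all placeholders.
[Unit]
Description=web.py JSON service
After=network.target

[Service]
User=www-data
ExecStart=/usr/bin/python /opt/myapp/scriptname.py 8888
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now myapp.service.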

Best way for client to fire off separate process without blocking client/server communication

The end result I am trying to achieve is to allow a server to assign specific tasks to a client when it makes its connection. A simplified version would be like this:
Client connects to Server
Server tells Client to run some network task
Client receives task and fires up another process to complete task
Client tells Server it has started
Server tells Client it has another task to do (and so on...)
A couple of notes
There would be a cap on how many tasks a client can do
The client would need to be able to monitor the task/process (running? died?)
It would be nice if the client could receive data back from the process to send to the server if needed
At first, I was going to try threading, but I have heard Python doesn't do threading correctly (is that right/wrong?).
Then I thought of firing off a system call from Python and recording the PID, then sending it signals for status and stop (SIGUSR1, SIGUSR2, SIGINT). But I'm not sure that will work, because I don't know if I can capture data from another process, and if you can, I don't have a clue how that would be accomplished (stdout or a socket file?).
What would you guys suggest as far as the best way to handle this?
Use Twisted's spawnProcess to spawn a subprocess. If you're using Twisted already, this should integrate pretty seamlessly into your existing protocol logic.
Use Celery, a Python distributed task queue. It probably does everything you want or can be made to do everything you want, and it will also handle a ton of edge cases you might not have considered yet (what happens to existing jobs if the server crashes, etc.)
You can communicate with Celery from your other software using a messaging queue like RabbitMQ; see the Celery tutorials for details on this.
It will probably be most convenient to use a database such as MySQL or PostgreSQL to store information about tasks and their results, but you may be able to engineer a solution that doesn't use a database if you prefer.
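On the simpler end, if a full task queue is overkill, the asker's start/monitor/capture requirements are already covered by the standard library's subprocess module. A minimal sketch (the task command is a placeholder):

```python
# Start a task as a child process, poll whether it is still running,
# and read back whatever it printed on stdout/stderr.
import subprocess
import sys

def start_task(args):
    """Launch a task as a child process, capturing stdout/stderr."""
    return subprocess.Popen(args, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)

def poll_task(proc):
    """Return None while the task is still running, else its exit code."""
    return proc.poll()

def finish_task(proc):
    """Wait for the task; return (exit_code, stdout, stderr)."""
    out, err = proc.communicate()
    return proc.returncode, out, err
```

For example, start_task([sys.executable, "task.py"]) lets the client report "started" immediately, poll_task() answers the "running? died?" question, and finish_task() returns the task's output for forwarding to the server.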

Decentralized networking in Python - How?

I want to write a Python script that will check the user's local network for other instances of the script currently running.
For the purposes of this question, let's say that I'm writing an application that runs solely via the command line, and will just update the screen when another instance of the application is "found" on the local network. Sample output below:
$ python question.py
Thanks for running ThisApp! You are 192.168.1.101.
Found 192.168.1.102 running this application.
Found 192.168.1.104 running this application.
What libraries/projects exist to help facilitate something like this?
One of the ways to do this would be for the application in question to broadcast UDP packets, with your application receiving them from the different nodes and displaying them. The Twisted networking framework provides facilities for doing such a job, and its documentation includes some simple examples too.
Well, you could write something using the socket module. You would have to have two programs though, a server on the users local computer, and then a client program that would interface with the server. The server would also use the select module to listen for multiple connections. You would then have a client program that sends something to the server when it is run, or whenever you want it to. The server could then print out which connections it is maintaining, including the details such as IP address.
This is documented extremely well at this link, in more depth than you need, but it will explain it to you as it did to me: http://ilab.cs.byu.edu/python/
You can try broadcast UDP; I found an example here: http://vizible.wordpress.com/2009/01/31/python-broadcast-udp/
You can have a server-based solution: a central server where clients register themselves, and query for other clients being registered. A server framework like Twisted can help here.
In a peer-to-peer setting, push technologies like UDP broadcasts can be used, where each client is putting out a heartbeat packet ever so often on the network, for others to receive. Basic modules like socket would help with that.
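The heartbeat idea can be sketched with the stdlib socket and json modules. The port and packet format below are arbitrary placeholders, and a real implementation would add a periodic timer and filter out its own packets:

```python
# UDP heartbeat sketch: each instance broadcasts a small JSON packet
# and listens for packets from other instances. PORT is arbitrary.
import json
import socket

PORT = 50000  # placeholder port

def make_heartbeat(app, host_ip):
    """Encode one heartbeat packet."""
    return json.dumps({"app": app, "ip": host_ip}).encode()

def parse_heartbeat(data):
    """Decode a packet; return the dict, or None if it isn't ours."""
    try:
        msg = json.loads(data.decode())
    except ValueError:
        return None
    return msg if "app" in msg and "ip" in msg else None

def broadcast(packet, port=PORT):
    """Send one heartbeat to the local broadcast address."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, ("<broadcast>", port))

def listen_once(port=PORT, timeout=5.0):
    """Block until one heartbeat arrives (or timeout); return the dict."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", port))
        s.settimeout(timeout)
        data, _addr = s.recvfrom(4096)
        return parse_heartbeat(data)
```

Each instance would call broadcast(make_heartbeat(...)) on a timer and run listen_once() in a loop, printing "Found <ip> running this application." for each new peer.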
Alternatively, you could go for a pull approach, where the interested peer discovers the others actively. This is probably the least straightforward option. First you need to scan the network, i.e. find out which IPs belong to the local network, and go through them. Then you would need to contact each IP in turn. If your program opens a TCP port, you could try to connect to it and find out whether your program is running there. If you want your program to be completely ignorant of these queries, you might need to open an ssh connection to the remote IP and scan the process list for your program. All this might involve various modules and libraries; one you might want to look at is execnet.
