I have a problem checking services on other Windows or Linux servers.
I need to make a request from one server to other servers and check whether their vital services are running or stopped.
I wrote Python code to check for services, but it only works on the local system.
import psutil

def getService(name):
    service = None
    try:
        service = psutil.win_service_get(name)
        service = service.as_dict()
    except Exception as ex:
        print(str(ex))
    return service

service = getService('LanmanServer')
if service:
    print("service found")
else:
    print("service not found")

if service and service['status'] == 'running':
    print("service is running")
else:
    print("service is not running")
Can this code also check services on remote servers, or can you suggest another approach?
I have reviewed suggestions such as using server agents (InfluxDB, etc.), but they do not fit my needs.
You can use the following code to check a service on a remote Windows machine; I think it will help with your problem.
import os
import subprocess

ip = your_ip
server_user = your_serviceuser
server_pass = your_pass

# Authenticate against the remote machine so SC can query it
command = f"net use \\\\{ip} {server_pass} /USER:{server_user}"
os.system(command)

# Ask the remote service control manager about the service
command = f"SC \\\\{ip} query SQLSERVERAGENT"
process = subprocess.Popen(command, stdout=subprocess.PIPE)
output, err = process.communicate()

# Normalize SC's output into a list of lines with whitespace stripped
lines = output.decode().replace(' ', '').replace('\t', '').replace('\r', '').split('\n')
if lines[3] == 'STATE:4RUNNING':
    print("service is running...")
As far as I know, psutil can only gather information about local processes and is not suitable for retrieving information about processes running on other hosts. If you want to check whether a process is running on another host, there are many ways to approach the problem, and the right solution depends on how deep you need to go and what your local situation is. Off the top of my head, here are some ideas:
If you are only dealing with network services with exposed ports:
A very simple solution would involve a script and a port scanner (nmap): if the port that a service listens on is open, we can assume that the service is running. Run the script every once in a while to check up on the services, and do your thing.
If you want to stay in Python, you can achieve the same result with Python's socket module, by trying to connect to a given host and port to determine whether the port that a service listens on is open (see the sketch below).
A Python package or tool for monitoring network services on other hosts like this probably already exists.
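Here is a minimal sketch of that socket-based check; the host and port are placeholder assumptions you would replace with your own:

import socket

def is_port_open(host, port, timeout=3.0):
    # Return True if a TCP connection to (host, port) succeeds in time.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: SMB (LanmanServer) typically listens on TCP port 445
if is_port_open('192.168.1.10', 445):
    print("service is (probably) running")
else:
    print("service looks down or unreachable")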
If you need more information and have to go deeper, or you want to check up on local services, your solution will have to involve a local monitor process on each host, and connecting to that process to gather information.
You can use your existing code to implement a server that lets clients connect and check up on the services running on that host. (Check the socket module's official documentation for examples of how to implement clients and servers.)
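As a rough, unauthenticated sketch of that idea (the bind address, port, and service name are assumptions for illustration), a monitor on a Windows host could answer each connection with the status of one service:

import socket
import psutil

HOST, PORT = '0.0.0.0', 49100   # placeholder bind address and port
SERVICE = 'LanmanServer'        # placeholder service to report on

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(5)
    while True:
        conn, _ = srv.accept()
        with conn:
            try:
                status = psutil.win_service_get(SERVICE).status()
            except Exception:
                status = 'unknown'
            conn.sendall(status.encode())

A client then only has to connect and read the one-line reply.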
Here's the big thing though. Based on your question and how it was asked, I would assume that you do not yet have the experience or the insight to implement this in a secure way. If you're using this for a simple hobby or student project, roll your own solution and learn. Otherwise, I would recommend that you check out an existing solution like Nagios, and follow its security recommendations very closely.
I have a Python application ("App1") that uses the serial port /dev/ttyUSB0. This application runs as a Linux service. It runs very well, as it is perfectly automated for the task I need it to perform. However, I have recently come to realize that I sometimes accidentally use the same serial port for another Python application that I am developing, which causes unwanted interference with "App1".
I did try to lock down "App1" as follows:
import fcntl, serial

ser = serial.Serial(PORT, BAUDRATE)
fcntl.lockf(ser, fcntl.LOCK_EX | fcntl.LOCK_NB)
However, in other applications I sometimes unknowingly use
ser = serial.Serial(PORT, BAUDRATE)
without checking ser.isOpen().
In order to prevent this, I was wondering: while I work on other applications, is there a way for ser = serial.Serial(PORT, BAUDRATE) to notify me that the serial port is already in use when I try to access it?
A solution I came up with is to create a cron job that runs repeatedly and essentially does the following:
fuser /dev/ttyUSB0  # list the PIDs of the processes using /dev/ttyUSB0
kill <PID of the second application shown in the output above>  # kill the application belonging to the second PID
The above would ensure that whenever two applications use the same serial port, the second PID gets killed (I am aware there are some leaks in this logic). What do you guys think? If this is not a good solution, is there any way for ser = serial.Serial(PORT, BAUDRATE) to notify me that /dev/ttyUSB0 is already in use when I try to access it, or do I need to implement the logic at the driver level? Any help would be highly appreciated!
I think the simplest thing to do here is to create a separate user for the "stable process" and use plain file permissions to give that user exclusive access to /dev/ttyUSB0.
With Unix groups and still plain permissions, that other process can still access any other resource it needs under your main user, so it should be quite simple.
If you are not aware of them, check the documentation for the chown and chmod commands.
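A minimal sketch of that setup, assuming a dedicated user named app1 (the user name and script path are just examples):

# create a dedicated user for the stable process
sudo useradd --system app1
# give that user exclusive ownership of the serial device
sudo chown app1 /dev/ttyUSB0
sudo chmod 600 /dev/ttyUSB0
# run App1 as that user
sudo -u app1 python app1.py

Note that on many systems udev resets device ownership when the adapter is replugged, so you may need a udev rule to make this persistent.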
I am using the googleapiclient Python API to start a VM, and then paramiko to connect to it via SSH.
I use googleapiclient.discovery to get the GCE API:
compute = googleapiclient.discovery.build('compute', 'v1')
I start my VM using the start API call:
req = compute.instances().start(project=project, zone=zone, instance=instance)
resp = req.execute()
while resp['status'] != 'DONE':
    time.sleep(1)
    resp = req.execute()
I then perform a get request to find the VM details, and in turn the ephemeral external IP address:
req = compute.instances().get(project=project, zone=zone, instance=instance)
info = req.execute()
ip_address = info['networkInterfaces'][0]['accessConfigs'][0]['natIP']
Finally, I use paramiko to connect to this IP address:
ssh_client = paramiko.SSHClient()
ssh_client.connect(ip_address)
Non-deterministically, the connect call fails:
.../lib/python3.6/site-packages/paramiko/client.py", line 362, in connect
raise NoValidConnectionsError(errors)
paramiko.ssh_exception.NoValidConnectionsError:
[Errno None] Unable to connect to port 22 on xxx.xxx.xxx.xxx
It seems to be timing related, as putting a time.sleep(5) before the ssh_client.connect call prevents this error.
I'm assuming this allows sufficient time for sshd to start accepting connections, but I'm not certain.
Putting sleeps in my code is uber hacky, so I'd much prefer to find a way to deterministically wait until the SSH daemon is running and available for me to connect to (if that is indeed the cause of the NoValidConnectionsError).
Is there a way to instruct the GCE API to only return from start when the VM is running and sshd is available for me to connect to?
Is there a way to request this information using the GCE API?
Alternatively, I see paramiko has a timeout option in the connect call. Should I just change my 5-second sleep to a 5-second timeout?
There’s no way for GCE to know if the guest is SSH-able. (For instance, imagine a case where the guest uses a nonstandard method for allowing remote connections, so even checking sshd wouldn’t work. Even if you could rely on sshd, the way to check that it’s running depends on its version, host OS, configuration, etc.) GCE only knows hardware-level information about the VM, such as whether it rebooted.
To solve your problem, I would try the timeout mechanism in paramiko like you described, or maybe retry the connection attempt in a loop with a timeout since paramiko might not implement a full-state-reset retry internally (just speculating, I’m not sure).
Also, I think 5 seconds may be a little low — it’s probably fine for average response time, but outliers will be slower, which could cause your connection attempts to be flaky. Maybe bump that to 30 seconds or a minute just to be totally safe.
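A minimal sketch of such a retry loop (the username, host key policy, and limits here are placeholder assumptions, not part of the original question):

import time
import paramiko

def connect_with_retry(ip_address, username, retries=10, delay=5.0, timeout=30.0):
    # Keep retrying until sshd accepts the connection or we give up.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    for _ in range(retries):
        try:
            client.connect(ip_address, username=username, timeout=timeout)
            return client
        except paramiko.ssh_exception.NoValidConnectionsError:
            time.sleep(delay)  # sshd is probably not accepting connections yet
    raise RuntimeError("could not reach sshd on " + ip_address)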
I'm trying to develop a server script using Python 3.4 that runs perpetually and responds to client requests on up to 5 separate ports. My preferred platform is Debian 8.0, which currently runs on a virtual machine in the cloud. My script works fine when I run it from the command line. I now need to (1) keep it running once I log off the server and (2) keep several ports open through the script so that a Windows client can connect to them.
For (1),
After trying several options that didn't seem to work [I tried using upstart, added the script to rc.local, used nohup with & to run it off the terminal, etc.], I eventually found something that does keep the script running, even if it's not very elegant: an hourly cron script that checks whether the script is in the process list, and if not, executes it.
Whenever I login to the VM now, I see the following output when I type 'ps -ef':
root 22007 21992 98 Nov10 14-12:52:59 /usr/bin/python3.4 /home/userxyz/cronserver.py
I assume that the script is running based on the fact that there is an active process in the system. I mention this part because I suspect that there could be a correlation with part (2) of my issue.
For (2),
The script is supposed to open ports 49100-49105 and listen for connection requests, etc. When I run the script from the terminal, Zenmap on my client machine verifies that these ports are open. However, when the cron job initiates the script, these ports don't seem to stay open, and my Windows client program can't connect to the script either.
The Python code I use for listening on a port:
import socket

f = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
f.bind((serviceIP, 49101))
f.listen(5)

while True:
    scName, address = f.accept()
    # [code to handle request]
    scName.shutdown(socket.SHUT_WR)
    scName.close()
Any insight or assistance would be greatly appreciated!
What you ask is not easy, because it depends on a variety of factors:
What is the frequency of the data received?
How many clients are expected to connect to this server?
Is there a chance two clients try to connect at the same time?
How long does it take to handle some received data?
What do you need to do with your data?
Write to a database?
Write to a file?
Calculate something?
Etc.
Depending on your answer you'll have some design decisions to make for your solution.
But since you need an answer, here's a hack that represents one way to do things:
import socketserver
import threading
import datetime


class SleepyGaryReceptionHandler(socketserver.BaseRequestHandler):
    log_file_name = "/tmp/sleepygaryserver.log"

    def handle(self):
        # self.request is defined in BaseRequestHandler
        data_received = self.request.recv(1024)
        # self.client_address is also defined in BaseRequestHandler
        sender_address = self.client_address[0]

        # This is where you are supposed to do something with your data
        # This is an example
        self.write_to_log('Someone from {} sent us "{}"'.format(sender_address,
                                                                data_received))

        # A way to stop the server from going on forever
        # But you could do this other ways; it depends what condition
        # should cause the shutdown
        if data_received.startswith(b"QUIT"):
            finishing_thread = threading.Thread(target=self.finish_in_another_thread)
            finishing_thread.start()

    # This will be called in another thread to terminate the server
    # self.server is also defined in BaseRequestHandler
    def finish_in_another_thread(self):
        self.write_to_log("Shutting down the server")
        self.server.shutdown()

    # Write something (with a timestamp) to a text file so that we
    # know something is happening
    def write_to_log(self, message):
        timestamp = datetime.datetime.now()
        timestamp_text = timestamp.isoformat(sep=' ', timespec='seconds')
        with open(self.log_file_name, mode='a') as log_file:
            log_file.write("{}: {}\n".format(timestamp_text, message))


service_address = "localhost"
port_number = 49101

server = socketserver.TCPServer((service_address, port_number),
                                SleepyGaryReceptionHandler)
server.serve_forever()
I am using the socketserver module here instead of listening directly on a socket. This standard library module was written to simplify implementing a server, so use it!
All I do here is write to a text file what has been received. You would have to adapt it to your use.
But to have it running continuously, use a cron job, one that starts the script at boot. Since this script will block until the server is stopped, we have to run it in the background. The crontab entry would look something like this:
@reboot /usr/bin/python3 /home/sleepygary/sleppys_server.py &
I have tested it, and after 5 hours it still does its thing.
Now like I said, it is a hack. If you want to go all the way and behave like any other service on your computer, you have to program it in a certain way. You can find more information on this page: https://www.freedesktop.org/software/systemd/man/daemon.html
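For example, a minimal systemd unit for this server (the description, paths, and file name here are assumptions) could look like this:

[Unit]
Description=Sleepy Gary's socket server
After=network.target

[Service]
ExecStart=/usr/bin/python3 /home/sleepygary/sleppys_server.py
Restart=on-failure

[Install]
WantedBy=multi-user.target

You would save it as something like /etc/systemd/system/sleepygary.service and activate it with systemctl enable and systemctl start.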
I'm really tired so there may be some errors here and there.
Imagine you have an HTTP server on your local machine, a typical Python/Twisted application. The server is used to access your local data and acts purely as a GUI interface, so the user can use his web browser, or a special application that acts like a web browser, to access his local data.
Now you want to be sure that only the local user who physically sits at this machine gets access to the HTTP server.
I will also have an FTP server, and it must be protected the same way too.
At the moment I am running this code for my HTTP server:
class LocalSite(server.Site):
    def buildProtocol(self, addr):
        if addr.host != '127.0.0.1':
            print 'WARNING connection from ' + str(addr)
            return None
        try:
            res = server.Site.buildProtocol(self, addr)
        except:
            res = None
        return res
So I am just checking the IP address at the moment, and I am not sure this is enough.
Is there any way to spoof a local IP address from a remote machine?
Well, if a bad guy gets access to my OS, I have no way to protect myself, but that is not my concern; my firewall and antivirus should take care of it, right?
Anyway, I would like to hear any extra ideas about increasing the security of such an HTTP server.
Maybe we can use the MAC address to verify the connection?
Or check the processes on the local machine and detect which one actually makes the connection?
We could use HTTPS, but in my understanding that works in the opposite direction: it lets the user trust the server, not the server trust the user.
Using a CAPTCHA is a kind of solution, but I do not like it at all (it strains users), and it will not work for the FTP server.
I also use a random port number every time the application starts.
The type of internet connection is not defined; this is a p2p application. Any user on the web can use my software, and it must be protected against remote access.
I believe the way you handled it is good enough. As for it being cross-platform: Windows (starting from Windows 7) also maps localhost to 127.0.0.1, but on earlier versions you have to define localhost in the main hosts file.
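One complementary step, not from the original code but common practice, is to bind the server to the loopback interface only, so the OS never accepts remote connections in the first place. A rough sketch in modern Twisted/Python 3 (the port and resource are placeholders):

from twisted.internet import reactor
from twisted.web import resource, server

class Hello(resource.Resource):
    isLeaf = True
    def render_GET(self, request):
        return b'hello, local user'

site = server.Site(Hello())
# interface='127.0.0.1' makes the socket reachable only from this machine
reactor.listenTCP(8080, site, interface='127.0.0.1')
reactor.run()

This complements, rather than replaces, the buildProtocol check above.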
I want to write a Python script that will check the user's local network for other instances of the script currently running.
For the purposes of this question, let's say that I'm writing an application that runs solely via the command line, and will just update the screen when another instance of the application is "found" on the local network. Sample output below:
$ python question.py
Thanks for running ThisApp! You are 192.168.1.101.
Found 192.168.1.102 running this application.
Found 192.168.1.104 running this application.
What libraries/projects exist to help facilitate something like this?
One way to do this would be for the application in question to broadcast UDP packets; your application then receives those packets from different nodes and displays them. The Twisted networking framework provides facilities for doing such a job, and its documentation includes some simple examples too.
Well, you could write something using the socket module. You would have to have two programs though: a server on the user's local computer, and a client program that interfaces with the server. The server would also use the select module to listen for multiple connections. The client would send something to the server when it is run, or whenever you want it to, and the server could then print out which connections it is maintaining, including details such as the IP address. A sketch of that server side follows below.
This is documented extremely well at the following link, in more depth than you need, but it will explain it to you as it did to me: http://ilab.cs.byu.edu/python/
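Here is that rough sketch of the select-based server (the port is a placeholder and error handling is omitted):

import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('', 54546))   # placeholder port
server.listen(5)

sockets = [server]
while True:
    readable, _, _ = select.select(sockets, [], [])
    for s in readable:
        if s is server:
            conn, addr = server.accept()
            sockets.append(conn)
            print('Maintaining connection from {}'.format(addr[0]))
        else:
            data = s.recv(1024)
            if not data:          # an empty read means the client disconnected
                sockets.remove(s)
                s.close()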
You can try broadcast UDP; I found an example here: http://vizible.wordpress.com/2009/01/31/python-broadcast-udp/
You can have a server-based solution: a central server where clients register themselves and query for other registered clients. A server framework like Twisted can help here.
In a peer-to-peer setting, push technologies like UDP broadcasts can be used, where each client puts out a heartbeat packet every so often on the network for others to receive; basic modules like socket would help with that (see the sketch below).
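A minimal sketch of that heartbeat approach using only the standard library (the port number and payload are arbitrary placeholders; run the sender and the receiver in separate processes or threads):

import socket
import time

PORT = 54545            # placeholder port, pick your own
MESSAGE = b'ThisApp'    # heartbeat payload identifying the application

def send_heartbeats():
    # Broadcast a heartbeat every 5 seconds for peers to pick up.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    while True:
        s.sendto(MESSAGE, ('<broadcast>', PORT))
        time.sleep(5)

def listen_for_peers():
    # Print the address of every peer whose heartbeat we receive.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(('', PORT))
    while True:
        data, (ip, _) = s.recvfrom(1024)
        if data == MESSAGE:
            print('Found {} running this application.'.format(ip))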
Alternatively, you could go for a pull approach, where the interested peer has to discover the others actively. This is probably the least straightforward option. For one, you need to scan the network, i.e. find out which IPs belong to the local network, and go through them. Then you would need to contact each IP in turn. If your program opens a TCP port, you could try to connect to it and find out whether your program is running there. If you want your program to be completely ignorant of these queries, you might need to open an SSH connection to the remote IP and scan the process list for your program. All this might involve various modules and libraries; one you might want to look at is execnet.