I am using the googleapiclient Python API to start a VM, and then paramiko to connect to it via SSH.
I use googleapiclient.discovery to get the GCE API:
compute = googleapiclient.discovery.build('compute', 'v1')
I start my VM using the start API call:
req = compute.instances().start(project=project, zone=zone, instance=instance)
resp = req.execute()
while resp['status'] != 'DONE':
    time.sleep(1)
    resp = req.execute()
I then perform a get request to find the VM details, and in turn the ephemeral external IP address:
req = compute.instances().get(project=project, zone=zone, instance=instance)
info = req.execute()
ip_address = info['networkInterfaces'][0]['accessConfigs'][0]['natIP']
Finally, I use paramiko to connect to this IP address.
ssh_client = paramiko.SSHClient()
ssh_client.connect(ip_address)
Non-deterministically, the connect call fails:
.../lib/python3.6/site-packages/paramiko/client.py", line 362, in connect
raise NoValidConnectionsError(errors)
paramiko.ssh_exception.NoValidConnectionsError:
[Errno None] Unable to connect to port 22 on xxx.xxx.xxx.xxx
It seems to be timing related, as putting a time.sleep(5) before the ssh_client.connect call has prevented this error.
I'm assuming this allows sufficient time for sshd to start accepting connections, but I'm not certain.
Putting sleeps in my code is uber hacky, so I'd much prefer to find a way to deterministically wait until the SSH daemon is running and available for me to connect to (if that is indeed the cause of the NoValidConnectionsError exception).
Is there a way to instruct the GCE api to only return from start when the VM is running and sshd is available for me to connect to?
Is there a way to request this information using the GCE api?
Alternatively, I see paramiko has a timeout option in the connect call - should I just change my 5-second sleep to a 5-second timeout?
There’s no way for GCE to know if the guest is SSH-able. (For instance, imagine a case where the guest uses a nonstandard method for allowing remote connections, so even checking sshd wouldn’t work. Even if you could rely on sshd, the way to check that it’s running depends on its version, host OS, configuration, etc.) GCE only knows hardware-level information about the VM, such as whether it rebooted.
To solve your problem, I would try the timeout mechanism in paramiko like you described, or maybe retry the connection attempt in a loop with a timeout, since paramiko might not implement a full-state-reset retry internally (just speculating, I'm not sure).
Also, I think 5 seconds may be a little low: it's probably fine for the average response time, but outliers will be slower, which could make your connection attempts flaky. Maybe bump that to 30 seconds or a minute just to be totally safe.
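For illustration, here is a minimal sketch of such a retry loop (the username, key path, retry count, and delay are my assumptions, not anything from your question):

import socket
import time

import paramiko

def wait_for_ssh(ip_address, username, key_filename, retries=30, delay=2):
    # Hypothetical helper: keep retrying until sshd accepts a connection.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    for _ in range(retries):
        try:
            client.connect(ip_address, username=username,
                           key_filename=key_filename, timeout=5)
            return client  # sshd is up and we authenticated
        except (paramiko.ssh_exception.NoValidConnectionsError,
                socket.timeout, socket.error):
            time.sleep(delay)  # sshd not ready yet; back off and retry
    raise RuntimeError('sshd did not come up in time')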
Similarly to "Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable", I cannot start my container because it does not (need to) listen on a port. It is a Discord bot that just needs outbound connections for the APIs.
Is there a way I can get around this error? I've tried listening on 0.0.0.0:8080 using the socket module with
import socket
s = socket.socket()
s.bind(("0.0.0.0", 8080))
s.listen()
Cloud Run is oriented to request-driven tasks, which explains its listen requirement.
Generally (!) clients make requests to your Cloud Run service endpoint triggering the creation of instances to process the requests and generate responses.
Generally (!) if there are no outstanding responses to be sent, the service scales down to zero instances.
Running a bot, you will need to configure Cloud Run artificially to:
1. Always run (and pay for) at least one instance (so that the bot isn't terminated)
2. Respond (!) to incoming requests (on one thread|process)
3. Run your bot (on one thread|process)
To do both #2 and #3 you'll need to consider Python multithreading|multiprocessing.
For #2, the code in your question is insufficient. You can use low-level sockets, but you will need to respond to incoming requests, which means implementing a server. It would be simpler to use e.g. Flask, which gives you an HTTP server with very little code.
And this server code only exists to satisfy the Cloud Run requirement; it is not needed for your bot.
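A minimal sketch of that pattern (the health route and the run_bot body are placeholders, not your actual bot code):

import threading

from flask import Flask

app = Flask(__name__)

@app.route("/")
def health():
    # Exists only to satisfy Cloud Run's requirement that the container
    # listens on a port; it does nothing for the bot itself.
    return "OK", 200

def run_bot():
    # Placeholder: start your Discord bot here, e.g. bot.run(TOKEN).
    pass

if __name__ == "__main__":
    threading.Thread(target=run_bot, daemon=True).start()
    app.run(host="0.0.0.0", port=8080)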
If I were you, I'd run the bot on a Compute Engine VM. You can do this for free.
If your bot is already packaged using a container, you can deploy the container directly to a VM.
I have a problem with checking services on other Windows or Linux servers.
I have to make a request from one server to the other servers and check whether the vital services on those servers are active or disabled.
I wrote Python code to check for services, but it only works on the local system.
import psutil

def getService(name):
    service = None
    try:
        service = psutil.win_service_get(name)
        service = service.as_dict()
    except Exception as ex:
        print(str(ex))
    return service

service = getService('LanmanServer')
if service:
    print("service found")
else:
    print("service not found")

if service and service['status'] == 'running':
    print("service is running")
else:
    print("service is not running")
Can this code do that, or can you suggest another approach?
I have reviewed suggestions such as using server agents (Influx, ...), which do not fit my needs.
You can use the following code for your service; I think it will help with your problem.
import os
import subprocess

ip = your_ip
server_user = your_serviceuser
server_pass = your_pass

# Authenticate against the remote machine so SC can query it.
command = f"net use \\\\{ip} {server_pass} /USER:{server_user}"
os.system(command)

# Query the state of the remote service via SC.
command = f"SC \\\\{ip} query SQLSERVERAGENT"
process = subprocess.Popen(command, stdout=subprocess.PIPE)
output, err = process.communicate()
lines = output.decode(errors="ignore").replace(" ", "").replace("\t", "").replace("\r", "").split("\n")
if lines[3] == "STATE:4RUNNING":
    print("service is running...")
As far as I know, psutil can only gather information about local processes and is not suitable for retrieving information about processes running on other hosts. If you want to check whether a process is running on another host, there are many ways to approach the problem, and the right one depends on how deep you need (or want) to go, and what your local situation is. Off the top of my head, here are some ideas:
If you are only dealing with network services with exposed ports:
A very simple solution would involve a script and a port scanner (nmap): if the port that a service listens behind is open, we can assume the service is running. Run the script every once in a while to check up on the services, and do your thing.
If you want to stay in Python, you can achieve the same result with Python's socket module, trying to connect to a given host and port to determine whether the port a service listens behind is open (see the sketch below).
A Python package or tool for monitoring network services on other hosts like this probably already exists.
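Here is a minimal sketch of that socket-based check (the host and port are made-up examples):

import socket

def port_is_open(host, port, timeout=3.0):
    # If a TCP connection to the port succeeds, assume the service is up.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: is something listening on port 22 of this (made-up) host?
print(port_is_open("192.168.1.10", 22))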
If you want more information and need to go deeper, or you want to check up on local services, your solution will have to involve a local monitor process on each host, and connecting to that process to gather information.
You can use your code to implement a server that lets clients connect to it, to check up on the services running on that host. (Check the socket module's official documentation for examples on how to implement clients and servers.)
Here's the big thing though. Based on your question and how it was asked, I would assume that you have neither the experience nor the insight yet to implement this in a secure way. If you're using this for a simple hobby/student project, roll your own solution and learn. Otherwise, I would recommend that you check out an existing solution like Nagios, and follow the security recommendations very closely.
I am using Pyro4 to make a remote connection between a Raspberry Pi and a computer. I've tested the code locally on my computer, but now I want to use it on the Raspberry Pi. The only problem is that the target machine refuses the connection. The nameserver is set, I can ask for the metadata, and the client does not give any error.
Server code:
daemon = Pyro4.core.Daemon("192.168.0.199")
Pyro4.config.HOST = "192.168.0.199"
ns = Pyro4.locateNS()
print ns.lookup("client", return_metadata=True) #this works
callback = MainController()
daemon.register(callback)
vc2 = Pyro4.core.Proxy("PYRONAME:client#192.168.0.199:12345")
Client code:
ns = Pyro4.locateNS()
Pyro4.config.HOST = "192.168.0.199"
uri = daemon.register(VehicleController)
ns.register("client#192.168.0.199:12345", uri)
print "Connection set!"
daemon.requestLoop()
Firewall is also off.
Thanks
The main issue is that the server never runs the daemon request loop and so cannot respond to requests.
But there are a lot of issues with the code as shown:
it is not complete.
you're mixing up server and client responsibilities; why is the client running a daemon? That's the server's job.
you're registering an object with a logical name that appears to be a physical one. That's not how the name server works.
you're registering things in both the client and server.
the server never runs the request loop of the daemon it creates.
what is that 'vc2' proxy doing in the server? Clients are supposed to create proxies to server objects.
it's generally best to set Pyro's config variables before doing anything else; that way you don't have to repeat the IP address the daemon binds to.
All in all you seem to be confused about various core concepts of Pyro.
Getting a better understanding (have you worked through the tutorial chapter of the manual?) and fixing the code accordingly will likely resolve your issue.
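For reference, a minimal sketch of the usual split (the class body and the logical name example.vehiclecontroller are illustrative assumptions, not your actual code):

# server.py
import Pyro4

@Pyro4.expose
class VehicleController(object):
    def ping(self):
        return "pong"

Pyro4.config.HOST = "192.168.0.199"  # set config before creating the daemon
daemon = Pyro4.Daemon()
ns = Pyro4.locateNS()
uri = daemon.register(VehicleController)
ns.register("example.vehiclecontroller", uri)  # a logical name, not host:port
daemon.requestLoop()  # the server, not the client, runs the loop

# client.py
import Pyro4

vc = Pyro4.Proxy("PYRONAME:example.vehiclecontroller")
print(vc.ping())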
Okay, got some more info
I can connect when I edit my Pyro4 core URI from obj_x@0.0.0.0:x to obj_x@192.168.0.199:x and connect manually. So I guess there is something wrong with the way I register the address with the nameserver.
I'll keep you posted.
Tom
I have created a Python socket server, using a class inherited from SocketServer.BaseRequestHandler and overriding the setup and handle methods. Of course, SocketServer.BaseRequestHandler.setup is called at the end of my own setup.
This is my server class
class MyServer(SocketServer.ForkingMixIn, SocketServer.TCPServer):
    timeout = 30
A typical forking socket server.
Here is how I run my server
while True:
    try:
        server = MyServer((host, port), MyRequestHandler)
        print('Server listening on', (host, port))
        server.timeout = 300  # seconds
        server.serve_forever()
    except:
        print('Error with server, retrying in 5 seconds...')
        print(sys.exc_info())
        sleep(5)
host and port are predefined, no problem with them.
The server works fine until the client count reaches 40. After that, no new connections are accepted; all are refused. I checked this with a client test Python script from my own system. Only 40!
Why 40? I have checked the source code for SocketServer and found nothing related to this. I currently have no clue regarding this issue. Any, and I really mean it, any help is appreciated :))
Thanks in advance
OS: CentOS 6.5
This is probably unrelated to Python. Tune your Linux kernel; in the testing phase, do stuff like:
turn syncookies off
increase file handles available for the user (every socket opened is also a file handle used - maybe you're running out of them?)
look at stuff like this: http://people.redhat.com/alikins/system_tuning.html#tcp
and: http://people.redhat.com/alikins/system_tuning.html#fds
check if stuff like fail2ban is installed (http://www.fail2ban.org/wiki/index.php/Main_Page)
check if rate limits are applied by iptables (in testing phase you could do iptables -F after making sure that default chain policy is ACCEPT)
and last but not least, check dmesg, /var/log/messages, /var/log/syslog, etc.
One thing that theoretically might be related to Python is SO_REUSEADDR:
http://www.unixguide.net/network/socketfaq/4.5.shtml
Check if you have it set for your socket.
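In the stdlib SocketServer this is controlled by a class attribute; a minimal sketch (reusing the class names from your question):

import SocketServer

class MyServer(SocketServer.ForkingMixIn, SocketServer.TCPServer):
    # When true, SO_REUSEADDR is set on the listening socket before bind().
    allow_reuse_address = True
    timeout = 30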
UPDATE:
I just realized that the 40 connections your socket server maxes out at is actually pretty low, so the simplest option could be running your socket server through strace (use the -f flag to trace forked child processes as well). You could e.g. start the socket server, open 35 simultaneous connections, then attach strace to the running process, set up 5 more connections, and see what strace reports. Very often in such situations syscalls fail with errors that are visible in strace and allow pinpointing the root cause relatively easily.
I really have no idea how I missed this in the source!
class ForkingMixIn:
    """Mix-in class to handle each request in a new process."""

    timeout = 300
    active_children = None
    max_children = 40
Yeah, now I see the max_children property.
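For anyone else hitting this wall, the fix is to override the class attribute (200 here is an arbitrary example value):

class MyServer(SocketServer.ForkingMixIn, SocketServer.TCPServer):
    timeout = 30
    max_children = 200  # lift the default cap of 40 forked children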
Thanks guys
Imagine you have an HTTP server on your local machine; this is a typical Python/Twisted application. The server is used to access your local data, acting just as a GUI interface. So the user can use his web browser, or a special application (which acts like a web browser), to access his local data.
Now you want to be sure that only the local user who physically sits at this machine gets access to the HTTP server.
I will also have an FTP server, and it must be protected the same way.
At the moment I am running this code for my HTTP server:
class LocalSite(server.Site):
    def buildProtocol(self, addr):
        if addr.host != '127.0.0.1':
            print 'WARNING connection from ' + str(addr)
            return None
        try:
            res = server.Site.buildProtocol(self, addr)
        except:
            res = None
        return res
So I am just checking the IP address at the moment, and I am not sure this is enough.
Is there any way to emulate a local IP from a remote machine?
Well, if a bad guy gets access to my OS, I have no way to protect myself, but that is not my concern here. My firewall and antivirus should take care of that, right?
Anyway, I would like to hear any extra ideas about increasing the security of such an HTTP server.
Maybe we can use the MAC address to verify the connection?
Check the processes on the local machine and detect which one actually makes the connection?
We could use HTTPS, but to my understanding this acts in the opposite direction: it lets the user trust the server, not the server trust the user.
Using a CAPTCHA is a kind of solution, but I do not like it at all (it strains users), and it will not work for the FTP server.
I also use a random port number every time the application starts.
The type of internet connection is not defined; this is a p2p application. Any user on the web can use my software, and it must be protected against remote access.
I believe the way you handled it is good enough. As for it being cross-platform: Windows (starting from Windows 7) also maps localhost to 127.0.0.1, but for earlier versions you have to define localhost in the hosts file.
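One extra hardening step (my suggestion, not something from the question): bind the listening socket to the loopback interface only, so remote machines cannot even open a TCP connection, regardless of the buildProtocol check. Here root stands for whatever resource your LocalSite serves:

from twisted.internet import reactor

# Only the loopback interface is bound, so connections from other
# machines never reach the server at all.
reactor.listenTCP(8080, LocalSite(root), interface='127.0.0.1')
reactor.run()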