How do I force a DNS resolution in python grpc?

I use the Python grpcio package to connect to a gRPC service via host name, something like this:
credentials = grpc.composite_channel_credentials(channel_credentials, call_credentials)
return grpc.aio.secure_channel(domain, credentials)
Many (over 1000) channels are created during the script's lifetime (on purpose).
The service is load balanced and resolves to multiple IP addresses.
I frequently run into the issue that, during the script's startup, DNS resolution is done only once: one of the IP addresses is picked and all requests are sent to that single IP address, leading to denial of service.
How can I force a truly random DNS resolution every time I create a channel, so that ideally every IP address is hit equally?
I found this document about grpc load balancing, but it doesn't seem to have actionable information about solving the problem: https://grpc.io/blog/grpc-load-balancing/
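For what it's worth, a minimal sketch of one commonly suggested direction: selecting the client-side round_robin load-balancing policy via standard gRPC channel arguments, so the channel spreads RPCs over all addresses the resolver returned instead of pinning to the first one. The option keys are standard gRPC channel arguments; whether they fully solve the behaviour described above depends on the deployment, so treat this as a sketch, not a confirmed fix.

# Sketch: use every resolved address (round_robin) and lower the minimum
# interval between DNS re-resolutions. Both keys are standard gRPC channel
# arguments; values here are illustrative.
import grpc

def make_channel(domain, channel_credentials, call_credentials):
    credentials = grpc.composite_channel_credentials(channel_credentials,
                                                     call_credentials)
    options = [
        ("grpc.lb_policy_name", "round_robin"),
        ("grpc.dns_min_time_between_resolutions_ms", 1000),
    ]
    return grpc.aio.secure_channel(domain, credentials, options=options)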

Related

Use python to naively find an Arduino's IP address from another PC on the local network

I have an ESP8266 NodeMCU device running a local HTTP server. I followed the quick-start instructions here.
My goal is to have a large number of these devices running in sync. To do that, I wrote this script:
#!/usr/bin/env python
import time
import sys
import socket
import requests


def myFunction():
    # This is what I have right now...
    ipAddresses = ["192.168.1.43", "192.168.1.44"]
    #
    # Instead, I need to search the local network for all the arduinos
    # [ESP8266s] and append their addresses to the list.
    #
    for address in ipAddresses:
        TurnOnLED(address)


def TurnOnLED(address):
    try:
        r = requests.post('http://' + address + '/LedON')
        r.close()
    except requests.exceptions.ConnectionError:
        pass


# Main
def Main():
    try:
        print("Press CTRL-C to stop.")
        while True:
            myFunction()
            time.sleep(60)
    except KeyboardInterrupt:
        sys.exit(0)


if __name__ == "__main__":
    Main()
This works, and allows me to control all of my devices from my desktop PC. My difficulty is with finding each IP address dynamically. I considered assigning a static IP address, but I would like them to be retail-optimized, so I cannot guarantee any particular IP address being unused out-of-the-box.
For the same reason, I want to be able to install the same Arduino code on all of them. This is what I mean by naive: I want my Python script to find the IP address of each device over the local network without any [unique] 'help' from the devices themselves. I want to install the same code on each device, and have my script find them all without any additional setup.
My first attempt was using the socket python module and looking for every hostname that began with "ESP-" since those are the first four characters of the factory hostname on all the boards. My query consistently turned up nothing, though.
This answer here has some good information, but my solution runs on a Macintosh, and I will not have the full host name, only "ESP-".
So, now that I know what the constraints are, and what was just part of the tutorial, here are my 2 cents (keep in mind that I, too, am just a hobbyist about everything that involves "voltage", and not even a good one).
1st strategy : PC is server
So, if I assume that your devices are, for example, temperature sensors, and you want to frequently grab all the temperatures, then one strategy could be that those devices all connect to the server every minute, report the temperature, and disconnect, for example using an HTTPS request.
So host the server on your PC, and have your devices behave as clients.
# Example request from MicroPython
import urequests

URL = "http://192.168.1.2:8000/report"  # address of the server on the PC (placeholder)
rep = urequests.get(URL)
print(rep.text)
rep.close()
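For the PC side of this strategy, here is a minimal sketch using only the Python standard library; the port and the idea of passing the reading in the request path are my assumptions, not part of the original answer.

# Tiny HTTP server on the PC that the ESP8266 clients call every minute,
# e.g. GET /report?temp=21.5. Port 8000 and the /report path are assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer

class ReportHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        print("report from", self.client_address[0], "->", self.path)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

HTTPServer(("0.0.0.0", 8000), ReportHandler).serve_forever()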
2nd strategy : a third party is server
A variant of that strategy is to have a sort of central data repository that acts as the server. So, I mean, your PC is not the server, nor any ESP, but another machine (which could be one ESP, a Raspberry Pi, or even a droplet or EC2 instance in the cloud; it doesn't matter, as long as you can host a backend on it).
So the ESP8266s are clients of that server. And so is your PC. That server has to be able to answer requests like "set value" from a client, and "get value" for the PC.
The drawback of those first 2 strategies is that they are fine when the devices are the ones sending data. They fit less well if the devices are (as in your LED example, though I surmise that was just an example) mostly the ones receiving data, because then you would have to wait for an ESP to connect before it can get the message "you need to switch on the LED".
3rd strategy : central registry
Or you can have it both ways. That is, keep your current architecture: when your PC wants something from the ESP8266s, it connects to them (they are the servers) and sends them requests.
But, in parallel to that, the ESP8266s also behave as clients to register themselves in a central directory, which could be on the PC or on a third party. The objective of that central directory is not to gather the data from the ESP8266s, just to keep an up-to-date list of them. Each minute, in parallel with their server activity, the ESP8266s send an "I am alive" message to this central directory. Then the PC (which may or may not be hosting that central directory) just needs to fetch all the IPs associated with a not-too-old "I am alive" message to get its list of IPs.
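A minimal sketch of such a central directory, hosted on the PC with the standard library; the /alive and /devices endpoints, the port, and the 2-minute freshness window are illustrative assumptions.

# "I am alive" registry: devices POST to /alive every minute; the PC GETs
# /devices to obtain the IPs seen recently. Endpoints and port are assumptions.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

alive = {}  # device IP -> timestamp of its last heartbeat

class RegistryHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path == "/alive":
            alive[self.client_address[0]] = time.time()
            self.send_response(200)
            self.end_headers()

    def do_GET(self):
        if self.path == "/devices":
            fresh = [ip for ip, t in alive.items() if time.time() - t < 120]
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(json.dumps(fresh).encode())

HTTPServer(("0.0.0.0", 8001), RegistryHandler).serve_forever()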
4th strategy : ARP
Once your PC is on, it could scan the network, using an ARP request, with scapy. Search for "scanning local network with scapy".
For example, here is a tutorial.
From there, you get a list of IPs, associated with MAC addresses. Now you need to know which ones are the ESP8266s. Here too you can apply several ideas. For example, you may use the MAC address to guess which ones are the ESP8266s. Or you may simply try a dummy request on every found IP to check which ones are the ESP8266s (using a specific API of the ESP8266 server code you wrote).
Or you may decide to host the server on each ESP8266 on a specific port, like 8123, so that you can quickly rule out devices that are not listening on port 8123.
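A small sketch of that ARP scan with scapy (needs root privileges; the 192.168.1.0/24 subnet is an assumption, adjust to your network).

# Broadcast an ARP "who-has" for every address in the subnet and print the
# IP/MAC pairs that answer. From the MACs (or a follow-up HTTP probe) you can
# then pick out the ESP8266s.
from scapy.all import ARP, Ether, srp

probe = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst="192.168.1.0/24")
answered, _ = srp(probe, timeout=2, verbose=False)

for _, reply in answered:
    print(reply.psrc, reply.hwsrc)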
5th strategy : don't reinvent the wheel
The best strategy is clearly a mix of my second and my third: having a directory, and a third party handling messages. But that is reinventing message brokers.
There is one well-known middleware suited to the ESP8266 (I mean, to low-profile IoT devices): MQTT.
That needs some more tutorial reading and trial and error on your part. You can start here, for example; that is just the first example I found on Google when searching "MQTT ESP8266 micropython". There are zillions of resources on that.
It may not seem the easiest way (compared to just copying and pasting some code that lists all the live IPs on a network). But in the long run, if you intend to have many ESP8266s, so many that you can't afford to assign them static IPs and simply list those IPs, you probably really need a message broker like that, and preferably not one that you reinvent.
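As a taste of what the MQTT route looks like on the device side, here is a MicroPython sketch assuming the bundled umqtt.simple module and a broker at a placeholder address; the client id and topic name are made up.

# MicroPython on the ESP8266: publish a heartbeat/reading to an MQTT broker.
# Broker IP, client id and topic are placeholders.
from umqtt.simple import MQTTClient

client = MQTTClient("esp-kitchen", "192.168.1.2")   # client id, broker address
client.connect()
client.publish(b"devices/esp-kitchen/alive", b"1")  # heartbeat topic (made up)
client.disconnect()

The PC side would then just subscribe to devices/# on the broker and never needs to know any device IP.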

How to route internet traffic via `Clash for Windows` (Ping from Python code is not working)

from os import system
system("ping www.twitter.com")
system("ping www.yahoo.com")
system("ping www.facebook.com")
I am in China, and Twitter and Facebook are banned here. I can open them in the browser using Clash for Windows software.
I have to download tweets from Twitter. So I need to ping the websites using Python to get tweets. I cannot ping the websites though.
How do I make my Python code use Clash for Windows?
Output of the above code:
Pinging www.twitter.com [108.160.169.186] with 32 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Ping statistics for 108.160.169.186:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
Pinging new-fp-shed.wg1.b.yahoo.com [180.222.102.201] with 32 bytes of data:
Reply from 180.222.102.201: bytes=32 time=258ms TTL=42
Reply from 180.222.102.201: bytes=32 time=229ms TTL=42
Reply from 180.222.102.201: bytes=32 time=230ms TTL=42
Request timed out.
Ping statistics for 180.222.102.201:
Packets: Sent = 4, Received = 3, Lost = 1 (25% loss),
Approximate round trip times in milli-seconds:
Minimum = 229ms, Maximum = 258ms, Average = 239ms
Pinging www.facebook.com [69.63.184.14] with 32 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Ping statistics for 69.63.184.14:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
OS: Windows 10 (updated to latest edition). Using PyCharm as my IDE.
You said in a comment that you are using Clash; however, Clash is not a VPN:
Clash - A rule-based tunnel in Go.
Features:
Local HTTP/HTTPS/SOCKS server with authentication support
VMess, Shadowsocks, Trojan, Snell protocol support for remote connections
Built-in DNS server that aims to minimize DNS pollution attack impact, supports DoH/DoT upstream and fake IP.
Rules based off domains, GEOIP, IP CIDR or ports to forward packets to different nodes
Remote groups allow users to implement powerful rules. Supports automatic fallback, load balancing or auto select node based off latency
Remote providers, allowing users to get node lists remotely instead of hardcoding in config
Netfilter TCP redirecting. Deploy Clash on your Internet gateway with iptables.
Comprehensive HTTP RESTful API controller
source: https://github.com/Dreamacro/clash
I'm not sure exactly how it works, my current understanding is that it allows you to use proxies.
Although it also has TUN mode as a "Premium Feature" (not sure what that means; I see no option to buy "Premium"), which may work similarly to a VPN, but I'm not sure:
Premium Features:
TUN mode on macOS, Linux and Windows. Doc
Match your tunnel by Script
Rule Provider
Clash for Windows, which you are trying to use, is a GUI for Clash.
Documentation is available, but it is only in Chinese:
https://github.com/Fndroid/clash-win-docs-new
[screenshot of the Clash for Windows interface]
I tried to use it, but I don't know how to configure it. It didn't work at all.
I see many possible solutions:
You can switch to a proper VPN. Paid VPNs are more recommendable than free ones; the cost is typically $3-10/month depending on the offer. Alternatively, you can try to set up OpenVPN on your own VPS; it may be a bit cheaper, but not necessarily.
You can configure your script to use a proxy. Libraries like requests support it (Proxies with Python 'Requests' module); see the sketch after this list.
You can try to read the Clash for Windows documentation to see whether what you are trying to achieve is possible. Maybe it is enough to turn on System Proxy, as visible on the screenshot above?
You can try to configure Clash in TUN mode. In my opinion this may be more difficult than solutions 1 and 2. If you prefer this way, I suggest reading the Clash documentation thoroughly: https://github.com/Dreamacro/clash/wiki/premium-core-features#tun-device
If you can afford it, I recommend solution 1 in most use cases. I understand that you prefer a free solution; however, servers are not free, and somebody needs to bear the cost of running them (hardware, electricity, etc.).
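A minimal sketch of solution 2, assuming Clash exposes its local HTTP proxy on 127.0.0.1:7890 (the port is an assumption, read the real one from the Clash for Windows settings).

# Send the HTTP request through Clash's local proxy instead of pinging.
# 127.0.0.1:7890 is an assumed port; check the Clash settings for yours.
import requests

proxies = {
    "http": "http://127.0.0.1:7890",
    "https": "http://127.0.0.1:7890",
}

r = requests.get("https://twitter.com", proxies=proxies, timeout=10)
print(r.status_code)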
There are a variety of Python libraries that can help you; openpyn is one of them. First, run this command in your terminal for setup:
sudo openpyn --init
After which all your Internet traffic can be redirected to a VPN server using the following command:
openpyn us
The command above is the default and redirects all traffic to the US; other locations can be chosen, see the link above for more info.
As soon as your traffic is redirected, you are free to access the banned sites as you did:
from os import system
system("ping www.twitter.com")
system("ping www.facebook.com")
When you have a VPN running and active, it should redirect all traffic via the VPN server. If it is not redirecting all traffic, then maybe it was configured to redirect only web browser traffic. It is also possible that you aren't using a true VPN but a proxy.
Please share which VPN you are using; it will help us to help you.
You can just start the VPN manually and then try executing your Python code.
Alternatively, you can control when your VPN is running from Python. This requires a different library for each VPN provider. Please tell us which VPN you use.
This is a working example for NordVPN using the nordvpn_switcher package:
import time
from os import system
from nordvpn_switcher import initialize_VPN, rotate_VPN, terminate_VPN

initialize_VPN(save=1, area_input=['complete rotation'])

for i in range(1):
    rotate_VPN()

    system("ping www.twitter.com")
    system("ping www.yahoo.com")
    system("ping www.facebook.com")

    print('\nDo whatever you want here (e.g. pinging). Pausing for 10 seconds...\n')
    time.sleep(10)

terminate_VPN()
Tor uses a SOCKS5 proxy server to provide anonymity. If you're using Tor, there are two ways to use the SOCKS5 proxy: configure it at the application level (browser, Python code, etc.) or at the network interface level. If you're using Tor, simply follow this answer; the SOCKS server runs at localhost:9050 by default - How to make python Requests work via socks proxy
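A sketch of that SOCKS route with requests, assuming Tor's default port 9050 and the SOCKS extra installed (pip install requests[socks]).

# Requests through a local SOCKS5 proxy such as Tor (127.0.0.1:9050).
# "socks5h" makes DNS resolution happen on the proxy side as well.
import requests

proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

r = requests.get("https://check.torproject.org", proxies=proxies, timeout=30)
print(r.status_code)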
Since you haven't done anything special and it's already working, I guess you're using a tunnel-based VPN. In that case it should work automatically, and in your case ping could simply be blocked by the VPN provider.
Ping uses the ICMP protocol while HTTP/HTTPS uses the TCP protocol at the transport layer (in the OSI model). They are very different things, and one working doesn't guarantee the other works. In many cases ping is blocked, by the server or by other middleware that doesn't support ICMP. Most cloud providers don't support ICMP in their networking components; for example, Azure doesn't for its Load Balancers. So, instead of trying ping, you should try a real HTTP request. Following is sample code:
import requests

r = requests.get("https://www.google.com")
print(r.status_code)
You can use this code:
from ping3 import ping

p = ping("example.com")
print(p)
Here is the solution:
go to https://www.wintun.net, download the latest release, and copy the right wintun.dll into the Clash home directory
restart Clash for Windows
open the Clash dashboard and switch TUN Mode on
https://github.com/Dreamacro/clash/wiki/premium-core-features#windows
Here is an update for people facing the same problem.
There is an option under Settings > System Proxy > Specify Protocol in Clash for Windows; I turned it on.
Before, I was not able to run pip commands; after turning this on, I can. I hope it will be useful for someone. (This is not an exact answer to the question, but I am sure it will be useful for somebody.)
Let's try to isolate this problem if it's Python or network-related.
Does it work if you run ping in the shell directly?
ping twitter.com
It's possible that your VPN has a setting that blocks pings from the internet. It should be one of the configurations in the firewall.

Receiving a response from an unknown host in Python on a Raspberry Pi

I'm developing an application in Python for the Raspberry Pi that requires some level of configuration for use. This configuration can be done by hand, but given the eventual placement of the Pi's themselves, hooking up a monitor and keyboard will be unfeasible. Thus, some level of auto-configuration will be required.
The configuration pretty much just sets some properties in the program before it begins being used, so ideally this configuration information could be delivered via a web response (the Raspberry Pi will always be connected to a local intranet via the eth0 interface).
However, the issue is that the web server which will deliver the configuration data to the Pi has an unpredictable hostname. The server may reside at 10.0.0.2, 10.0.0.3, or 192.168.1.2 for that matter; there's no way of knowing.
I would like to use the broadcast address to essentially send a request to every host on the subnet and wait for a response that sounds right, e.g. with a 200 status code and some data that makes sense. Then, knowing the host of the server, it could request the initial configuration data and continue.
I'm currently using the urllib Python module for my web requests but I quickly discovered that using the broadcast address to achieve this goal would be confusing, since I'm essentially sending out a single request but preparing for multiple (mostly bad) responses.
Has anyone out there done anything like this using Python?
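Since HTTP libraries like urllib don't fit a one-to-many broadcast well, a common pattern for this kind of discovery is a raw UDP broadcast probe; here is a sketch, assuming the configuration server also listens on a known UDP port (50000 here, purely hypothetical) and replies to a discovery datagram with its address.

# UDP broadcast discovery: send one datagram to the whole subnet and take the
# address of whichever host replies. Port 50000 and the "DISCOVER" payload
# are hypothetical and must match whatever the server implements.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.settimeout(2.0)
sock.sendto(b"DISCOVER", ("255.255.255.255", 50000))

try:
    data, (host, _port) = sock.recvfrom(1024)
    print("configuration server at", host, "said", data)
except socket.timeout:
    print("no reply to the broadcast")
finally:
    sock.close()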

EC2 fails to connect via FTPS, but works locally

I'm running Python 2.6.5 on EC2 and I've replaced the old ftplib with the newer one from Python 2.7 that allows importing FTP_TLS. Yet the following hangs on me:
from ftplib import FTP_TLS
ftp = FTP_TLS('host', 'username', 'password')
ftp.retrlines('LIST')  # times out after 15-20 min
I'm able to run these three lines successfully in a matter of seconds on my local machine, but it fails on EC2. Any idea as to why this is?
Thanks.
It certainly sounds like a problem related to whether or not you're in PASSIVE mode on your FTP connection, and whether both ends of the connection can support it.
The ftplib documentation suggests that it is on by default, which is a shame, because I was going to suggest that you turn it on. Instead, I'll suggest that you use set_debuglevel so you can see the lower levels of the protocol happening and see what mode you're in. That should give you information on how to proceed: either you're in passive mode and the other end can't deal with it properly, or (hopefully) you're not, but you should be.
FTP and FTPS (but not SFTP) can be configured so that the server makes a backwards connection to the client for the actual transfers or so that the client makes a second forward connection to the server for the transfers. The former, especially, is prone to complications whenever network address translation is involved. Without the TLS, some firewalls can actually rewrite the FTP session traffic to make it magically work, but with TLS that's impossible due to encryption.
The fact that you are presumably authenticating and then timing out when you try to transfer data (LIST requires a second connection in one direction or the other) is the classic symptom, usually, of a setup that either needs passive mode, OR, there's this:
Connect as usual to port 21 implicitly securing* the FTP control connection before authenticating. Securing the data connection requires the user to explicitly ask for it by calling the prot_p() method.
ftps.prot_p() # switch to secure data connection
ftps.retrlines('LIST') # list directory content securely
I don't work with FTPS often, since SFTP is so much less problematic, but if you're not doing that, the far end server might not be cooperating.
*Note: I suspect this sentence is trying to say that FTP_TLS "implicitly secures the FTP control connection", in contrast with the explicit securing of the data connection.
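Pulling the suggestions above together, a sketch (the host and credentials are the placeholders from the question):

# Explicit FTPS with protocol debugging, passive mode and a TLS-protected
# data connection before the LIST that was hanging.
from ftplib import FTP_TLS

ftps = FTP_TLS('host', 'username', 'password')  # control connection + login
ftps.set_debuglevel(2)    # show the FTP conversation, including the mode
ftps.set_pasv(True)       # passive mode: client opens the data connection
ftps.prot_p()             # secure the data connection explicitly
ftps.retrlines('LIST')    # the step that needs the data connection
ftps.quit()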
If you're still having trouble, could you try ruling out Amazon firewall problems? (I'm assuming you're not using a host-based firewall.)
If your EC2 instance is in a VPC then in the AWS Management Console could you:
ensure you have an internet gateway
ensure that the subnet your EC2 instance is in has a default route (0.0.0.0/0) configured pointing at the internet gateway
in the Security Group for both inbound and outbound allow All Traffic from all sources (0.0.0.0/0)
in the Network ACLs for both inbound and outbound allow All Traffic from all sources (0.0.0.0/0)
If your EC2 instance is NOT in a VPC then in the AWS Management Console could you:
in the Security Group for inbound allow All Traffic from all sources (0.0.0.0/0)
Only do this in a test environment! (obviously)
This will open your EC2 instance up to all traffic from the internet. Hopefully you'll find that your FTPS is now working. Then you can gradually reapply the security rules until you find out the cause of the problem. If it's still not working then the AWS firewall is not the cause of the problem (or you have more than one problem).

is it possible sending data via NIC directly instead of via routing table lookup (linux command or python)

Suppose I have 3 NICs in one host PC; name them eth0, eth1 and eth2.
All interfaces have their own IP address in different subnets; however, all the gateway routers of those NICs have a route to one server I want to access. I want to establish 3 connections to that server and get responses via the different NICs.
I set static routes with different metrics on that host PC, which means all ethX have a route to the server.
Is it possible to establish TCP sessions via a specific NIC directly, in Python or via a shell command, something like:
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.connect((HOST, PORT, eth1)) # eth1 is my fiction
# and in same program
s0 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s0.connect((HOST, PORT, eth0)) # ethO is my fiction
so that the traffic can be sent out of that eth directly instead of via a routing table lookup?
Thanks!
The problem is that routing is determined by the destination, and just because you have multiple routes available doesn't mean the OS will in fact use those multiple routes. What you want is for the OS to take the source address into account when determining which outbound route to use. Once you get the routing correct, you can bind the source IP address on your socket and it will do the right thing (see the sketch below).
In Linux, you do this by creating multiple routing tables and setting up rules for picking those routing tables based on the source IP address. In general, the documentation for this is pretty bad, but searching for "multiple default routes" and iproute2 can help narrow it down. Here are a couple of reasonable pages:
http://www.debian-administration.org/articles/377
http://lartc.org/howto/lartc.rpdb.multiple-links.html
OpenBSD also has support for multiple routing tables and I find it a bit easier to configure without hacking stuff into init scripts. (You can even just do it in pf.conf)
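A sketch of the socket side of this; the per-source routing rules from the links above are still required for the packets to actually leave the matching NIC, and all addresses below are placeholders.

# Bind each socket to the source IP of the interface you want before
# connecting; combined with source-based routing rules, the traffic for each
# socket then leaves through the corresponding NIC.
import socket

HOST, PORT = "198.51.100.10", 443          # the server (placeholder)

s0 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s0.bind(("192.0.2.10", 0))                 # eth0's address (placeholder)
s0.connect((HOST, PORT))

s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.bind(("203.0.113.10", 0))               # eth1's address (placeholder)
s1.connect((HOST, PORT))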
