Facing a problem when trying to send an email using Python

I wrote the code like this
import smtplib
server=smtplib.SMTP('localhost')
Then it raised an error like
error: [Errno 10061] No connection could be made because the target machine actively refused it
I am new to SMTP. Can you tell me what exactly the problem is?

It sounds like SMTP is not set up on the computer you are trying this from. Try using your ISP's mail server (often something like mail.example.com) or make sure you have an SMTP server installed locally.

Rather than trying to install an SMTP server locally, you can set up a simple debugging SMTP server from the console.
Do this:
python -m smtpd -n -c DebuggingServer localhost:1025
All mail sent to it will be printed to the console.
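Your script would then point smtplib at that port instead of the default port 25. A minimal sketch, assuming the debugging server above is already running on localhost:1025 (the addresses are placeholders):
import smtplib
from email.message import EmailMessage

# Assumes the debugging server from the command above is listening on port 1025.
msg = EmailMessage()
msg["From"] = "me@example.com"
msg["To"] = "you@example.com"
msg["Subject"] = "Test"
msg.set_content("Hello from smtplib")

server = smtplib.SMTP("localhost", 1025)
server.send_message(msg)
server.quit()
The message will show up in the console where the debugging server is running.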

To send e-mail using the Python SMTP module you need to somehow obtain the name of a valid mail exchanger (MX). That's either a local hub or smart relay of your own or you can query DNS for the public MX records for each target host/domain name.
This requirement is glossed over in the docs. It's a horrible omission in Python's standard libraries that they don't provide an easy way to query DNS for an MX record. (There are rather nice third-party Python DNS libraries such as dnspython and PyDNS with extensive support for far more DNS than you need for anything related to e-mail.)
In general you're probably better off using a list of hubs or relays from your own network (or ISP). This is because your efforts to send mail directly to the published MX hosts may otherwise run afoul of various attempts to fight spam. (For example, it frequently won't be possible from wireless networks in coffee shops, hotels, and across common household cable and DSL connections; most of those address ranges are listed in various databases as potential sources of spam.) In that case you could store and/or retrieve the names/addresses of your local mail hubs (or smart relays) through any means you like. It could be a .cfg file (ConfigParser; a small sketch follows), or through an LDAP or SQL query, or even (gasp!) hard-coded into your scripts.
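For example, a minimal sketch of the .cfg approach (the file name, section, and option names here are made up for illustration):
import configparser

# relays.cfg might look like:
# [mail]
# relays = smtp1.example.net, smtp2.example.net
config = configparser.ConfigParser()
config.read("relays.cfg")
relays = [h.strip() for h in config.get("mail", "relays").split(",")]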
If, however, your code is intended to run on a suitable network (for example in a colo, or a data center) then you'll have to do your own MX resolution. You could install one of the aforementioned PyPI packages. If you need to limit yourself to the standard libraries then you might be able to rely on the commonly available dig utility that's included with most installations of Linux, MacOS X, Solaris, FreeBSD and other fine operating systems.
In that case you'd call a command like dig +short aol.com mx | awk '{print $NF}' through subprocess.Popen() which can be done with this rather ugly one-liner:
mxers = subprocess.Popen("dig +short %s mx | awk '{print $NF}'"
% target_domain, stdout=subprocess.PIPE,
shell=True).communicate()[0].split()
Then you can attempt to make an SMTP connection to each of the resulting hostnames in turn. (This is fine so long as your "target_domain" value is adequately sanitized; don't pass untrusted data through Popen() with shell=True.)
The safer version looks even hairier:
mxers = subprocess.Popen(["dig", "+short", target_domain, "mx"],
stdout=subprocess.PIPE).communicate()[0].split()[1::2]
... where the slice/stride at the end replaces the call to awk, thus obviating the need for shell=True.
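Putting the two pieces together, a rough sketch of trying each MX host in turn might look like this (Python 3, error handling kept minimal; the addresses you pass in are placeholders):
import smtplib
import subprocess

def send_via_mx(target_domain, from_addr, to_addr, message):
    # Resolve the MX hosts with dig; [1::2] keeps just the hostnames.
    mxers = subprocess.Popen(["dig", "+short", target_domain, "mx"],
                             stdout=subprocess.PIPE).communicate()[0].split()[1::2]
    for mx in mxers:
        host = mx.decode().rstrip(".")   # dig output is bytes with a trailing dot
        try:
            server = smtplib.SMTP(host, timeout=10)
            server.sendmail(from_addr, to_addr, message)
            server.quit()
            return host                  # delivered through this exchanger
        except (smtplib.SMTPException, OSError):
            continue                     # try the next MX host
    return None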

Related

How to route internet traffic via `Clash for Windows` (Ping from Python code is not working)

from os import system
system("ping www.twitter.com")
system("ping www.yahoo.com")
system("ping www.facebook.com")
I am in China, and Twitter and Facebook are banned here. I can open them in the browser using the Clash for Windows software.
I have to download tweets from Twitter, so I need to reach these websites from Python, but I cannot even ping them.
How do I make my Python code use Clash for Windows?
Output of the above code:
Pinging www.twitter.com [108.160.169.186] with 32 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Ping statistics for 108.160.169.186:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
Pinging new-fp-shed.wg1.b.yahoo.com [180.222.102.201] with 32 bytes of data:
Reply from 180.222.102.201: bytes=32 time=258ms TTL=42
Reply from 180.222.102.201: bytes=32 time=229ms TTL=42
Reply from 180.222.102.201: bytes=32 time=230ms TTL=42
Request timed out.
Ping statistics for 180.222.102.201:
Packets: Sent = 4, Received = 3, Lost = 1 (25% loss),
Approximate round trip times in milli-seconds:
Minimum = 229ms, Maximum = 258ms, Average = 239ms
Pinging www.facebook.com [69.63.184.14] with 32 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Ping statistics for 69.63.184.14:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
OS: Windows 10 (updated to latest edition). Using PyCharm as my IDE.
You said in a comment that you are using Clash; however, Clash is not a VPN:
Clash - A rule-based tunnel in Go.
Features:
Local HTTP/HTTPS/SOCKS server with authentication support
VMess, Shadowsocks, Trojan, Snell protocol support for remote connections
Built-in DNS server that aims to minimize DNS pollution attack impact, supports DoH/DoT upstream and fake IP.
Rules based off domains, GEOIP, IP CIDR or ports to forward packets to different nodes
Remote groups allow users to implement powerful rules. Supports automatic fallback, load balancing or auto select node based off latency
Remote providers, allowing users to get node lists remotely instead of hardcoding in config
Netfilter TCP redirecting. Deploy Clash on your Internet gateway with iptables.
Comprehensive HTTP RESTful API controller
source: https://github.com/Dreamacro/clash
I'm not sure exactly how it works; my current understanding is that it allows you to use proxies.
It also has TUN mode as a "Premium Feature" (not sure what that means; I see no option to buy "Premium"), which may work similarly to a VPN, but I'm not sure:
Premium Features:
TUN mode on macOS, Linux and Windows. Doc
Match your tunnel by Script
Rule Provider
Clash for Windows, which you are trying to use, is a GUI for Clash.
Documentation is available, but it is only in Chinese:
https://github.com/Fndroid/clash-win-docs-new
screenshot:
I tried to use it, but I don't know how to configure it. It didn't work at all.
I see many possible solutions:
You can switch to a proper VPN. Paid VPNs are generally more reliable than free ones; the cost is typically $3-10/month depending on the offer. Alternatively, you can try to set up OpenVPN on your own VPS; it may be a bit cheaper, but not necessarily.
You can configure your script to use a proxy. Libraries like requests support it (a minimal sketch follows after this list of options): Proxies with Python 'Requests' module
You can try to read the Clash for Windows documentation to see whether what you are trying to achieve is possible. Maybe it is enough to turn on System Proxy, as visible in the screenshot above?
You can try to configure Clash in TUN mode. In my opinion this may be more difficult than solutions 1 and 2. If you prefer this route, I suggest reading the Clash documentation thoroughly: https://github.com/Dreamacro/clash/wiki/premium-core-features#tun-device
If you can afford it, I recommend solution 1 in most use cases. I understand that you would prefer a free solution; however, servers are not free, and somebody has to cover the costs of running them (hardware, electricity, etc.).
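For option 2, a minimal sketch of pointing requests at a local proxy is shown below. The port 7890 is a common Clash default, but it is an assumption here; check what your own Clash configuration actually exposes:
import requests

# 127.0.0.1:7890 is assumed to be the local HTTP proxy exposed by Clash;
# adjust the port to whatever your configuration actually uses.
proxies = {
    "http": "http://127.0.0.1:7890",
    "https": "http://127.0.0.1:7890",
}

r = requests.get("https://twitter.com", proxies=proxies, timeout=10)
print(r.status_code)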
There are a variety of Python libraries that can help you; openpyn is one of them. First, run this command in your terminal to set it up:
sudo openpyn --init
After which all your Internet traffic can be redirected to a VPN server using the following command:
openpyn us
The command above is the default and routes all traffic through a US server. Other locations can be chosen; see the link above for more info.
As soon as your traffic is redirected, you are free to access the banned sites as in your code:
from os import system
system("ping www.twitter.com")
system("ping www.facebook.com")
When you have a VPN running and active, it should redirect all traffic via the VPN server. If it is not redirecting all traffic, then maybe it was configured to redirect only web browser traffic. It is also possible that you aren't using a true VPN, but a proxy.
Please share which VPN you are using; it will help us to help you.
You can just start the VPN manually and then try executing your Python code.
Alternatively, you can control when your VPN is running from Python. This requires a different library for each VPN provider. Please tell us which VPN you use.
This is a working example for NordVPN using the nordvpn_switcher package:
import time
from os import system
from nordvpn_switcher import initialize_VPN, rotate_VPN, terminate_VPN

initialize_VPN(save=1, area_input=['complete rotation'])

for i in range(1):
    rotate_VPN()
    system("ping www.twitter.com")
    system("ping www.yahoo.com")
    system("ping www.facebook.com")
    print('\nDo whatever you want here (e.g. pinging). Pausing for 10 seconds...\n')
    time.sleep(10)

terminate_VPN()
Tor uses a SOCKS5 proxy server to provide anonymity. If you're using Tor, there are two ways to use the SOCKS5 proxy: one is to configure it at the application level (browser, Python code, etc.), the other at the network interface level. For the application-level approach, simply follow this answer; the SOCKS server is running at localhost:9050 by default - How to make python Requests work via socks proxy
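As a rough sketch, assuming Tor is running locally and the requests SOCKS extra is installed (pip install "requests[socks]"):
import requests

# socks5h:// makes DNS resolution happen through the proxy as well.
proxies = {
    "http": "socks5h://localhost:9050",
    "https": "socks5h://localhost:9050",
}

r = requests.get("https://check.torproject.org", proxies=proxies, timeout=30)
print(r.status_code)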
Since you haven't done anything and it's already partly working, I guess you're using a tunnel-based VPN. In this case, it should work automatically. In your case, ping could be blocked by the VPN provider.
Ping uses the ICMP protocol, while HTTP/HTTPS uses the TCP protocol at the transport layer (in the OSI model). They are very different things, and one working doesn't guarantee the other works. In many cases ping is blocked by the server or by middleware that doesn't support ICMP. Many cloud providers don't support ICMP in their networking components; for example, Azure doesn't for their Load Balancers. So, instead of trying ping, you should try a real HTTP request. The following is sample code:
import requests

r = requests.get("https://www.google.com")  # requests needs the scheme (https://) in the URL
print(r.status_code)
You can use this code:
from ping3 import ping
p=ping("example.com")
print(p)
Here is the solution:
Go to https://www.wintun.net, download the latest release, and copy the right wintun.dll into the Clash home directory.
Restart Clash for Windows.
Open the Clash dashboard and switch TUN Mode on.
https://github.com/Dreamacro/clash/wiki/premium-core-features#windows
Here is an update for people facing the same problem.
There is an option under Settings > System Proxy > Specify Protocol in Clash for Windows; I turned it on.
Before, I was not able to run pip commands; after turning this on, I can. I hope it will be useful for someone. (This is not an exact answer to the question, but I am sure it will be useful to someone.)
Let's try to isolate whether this problem is Python-related or network-related.
Does it work if you run ping in the shell directly?
ping twitter.com
It's possible that your VPN has a setting that blocks pings from the internet. It should be one of the configuration options in the firewall.

Python (but also C) Strange sorting done by gethostbyname

I have found this problem in Python but I was also able to reproduce it with a basic C program.
I am in CentOS 6 (tested also on 7), I have not tested on other Linux distributions.
I have an application on 2 VMs. One has IP address 10.0.13.30 and the other is 10.0.13.56. They have a shared FQDN to allow DNS-based load balancing (and high availability) using gethostbyname or getaddrinfo (which is what is suggested in the Python docs).
If my client application is on a different sub-net (10.0.12.x for example), I have no problem: socket.gethostbyname(FQDN) randomly returns either 10.0.13.30 or 10.0.13.56.
But if my client application is on the same sub-network, it always returns the same entry, and it always seems to be the "closest": I have deployed it on 10.0.13.31 and it always returns 10.0.13.30, and on 10.0.13.59 it always returns 10.0.13.56.
On these servers, CLI commands such as ping and dig return the results in different orders almost every time.
I have searched many threads and concluded that it seems to be a kind of "prioritization done by glibc to improve the chances of success", but I have not found any way to disable it.
In my case the 2 client and 2 server VMs are on VMware connected to a single router, so I do not see why the fact that the last byte of the server's IP is closest to the last byte of the client's IP should be taken into account.
This is a replication of a problem that I have at a customer site, so it is not an option for me to just move the VMs to a different sub-net :-( ....
Does anybody have an idea for getting correct load balancing within the same sub-network? I can partially control the VM config, so if a setting has to be changed I can do it.
Instead of hoping that the standard library will do load balancing for you, use socket.getaddrinfo() and choose one of the resulting hosts at random explicitly. This will also make it easy to fail over to a different host if the first one you try is not available.
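A minimal sketch of that idea (the timeout and error handling are just illustrative choices):
import random
import socket

def connect_balanced(fqdn, port):
    # getaddrinfo returns every address the resolver knows about; shuffling
    # spreads the load instead of always using whichever entry glibc sorts first.
    infos = socket.getaddrinfo(fqdn, port, socket.AF_INET, socket.SOCK_STREAM)
    random.shuffle(infos)
    for family, socktype, proto, _canonname, sockaddr in infos:
        try:
            s = socket.socket(family, socktype, proto)
            s.settimeout(5)
            s.connect(sockaddr)
            return s                      # connected to one of the hosts
        except OSError:
            continue                      # fail over to the next address
    raise OSError("no reachable host for %s:%d" % (fqdn, port))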

Query machine for hostname

I want to be able to scan a network of servers and match IP addresses to hostnames.
I saw a lot of questions about this (with a lot of down votes), but none are exactly what I'm looking for.
So I've tried python's socket library socket.gethostbyaddr(ip). But this only returns results if I have a DNS setup or the IP-to-host mapping is in my hosts file.
I want to be able to ask a machine for their hostname, rather than querying DNS.
How can I query a Linux machine for its hostname?
Preferably using python or bash, but other ways are good too.
You can remotely execute the hostname command on these machines to acquire the hostname.
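For example, a rough sketch using plain ssh from Python (assumes key-based authentication to the target machines; the user name and IP addresses are placeholders):
import subprocess

def remote_hostname(ip, user="admin"):
    # Runs `hostname` on the remote machine over SSH and returns its output.
    result = subprocess.run(
        ["ssh", "-o", "ConnectTimeout=5", "%s@%s" % (user, ip), "hostname"],
        capture_output=True, text=True, timeout=15)
    return result.stdout.strip() or None

for ip in ("192.168.1.10", "192.168.1.11"):   # example addresses
    print(ip, remote_hostname(ip))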

Reading values over ssh in python

I would like to be able to gather values such as the number of CPUs on a server, storage space, etc., and assign them to local variables in a Python script. I have paramiko set up, so I can SSH to remote Linux nodes, run arbitrary commands on them, and have the output returned to the script. However, many commands (such as df -h) are very verbose, when all I want to assign is a single integer or value.
For the number of CPUs, there is Python functionality to get this value, such as the psutil module's psutil.NUM_CPUS, which returns an integer. However, while I can run this locally, I can't exactly execute it on remote nodes, as they don't have a Python environment configured.
I am wondering how common it is to manually parse the output of Linux commands (such as df -h) and grab an integer from it (similar to bash's cut), or whether it is better to set up an environment on each remote server (or some other way).
Unfortunately it is very common to manually parse the output of Linux commands, but you shouldn't. This is a really common server-admin task, and you shouldn't reinvent the wheel.
You can use something like sar to log remote stats and retrieve the reports over ssh.
http://www.ibm.com/developerworks/aix/library/au-unix-perfmonsar.html
You should also look at salt. It lets you run the same command on multiple machines and get their output.
http://www.saltstack.com/
These are some of the options but remember to keep it DRY ;)
Like @Floris, I believe the simplest way is to design your commands so that the result is simple to parse. However, parsing the result of a command is not at all uncommon; bash scripts are full of grep, sed, wc, or awk commands that do exactly that.
The same approach is used by psutil itself; see how it reads /proc/cpuinfo for cpu_count. You can implement the same parsing, only reading the remote /proc/cpuinfo, or counting the lines in the output of ls -1 /sys/bus/cpu/devices/.
Actually, the best way to get this information is from /proc and /sys; they are specifically designed to ease access to internal information from simple programs, with minimal parsing needed.
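Since you already have paramiko set up, here is a minimal sketch of reading the remote /proc/cpuinfo this way (the host and user are placeholders, and key-based auth is assumed):
import paramiko

def remote_cpu_count(host, user):
    # Counts "processor" lines in /proc/cpuinfo, the same data psutil parses locally.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user)
    try:
        _stdin, stdout, _stderr = client.exec_command("grep -c ^processor /proc/cpuinfo")
        return int(stdout.read().strip())
    finally:
        client.close()

print(remote_cpu_count("node1.example.com", "admin"))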
If you can put your own programs or scripts on the remote machine there are a couple of things you can do:
Write a script on the remote machine that outputs just what you want, and execute that over ssh.
Use ssh to tunnel a port to the other machine and communicate with a server on the remote machine that responds to requests for information with the data you want over a socket (a minimal sketch follows).
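A minimal sketch of that second option: a tiny TCP server that answers a "cpus" request with the CPU count. The port and the one-word protocol are made up for illustration; in practice you would reach it through an SSH tunnel:
# server.py, run on the remote machine
import os
import socketserver

class InfoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        request = self.rfile.readline().strip()
        if request == b"cpus":
            self.wfile.write(str(os.cpu_count()).encode() + b"\n")

if __name__ == "__main__":
    # Bind to localhost only; tunnel port 9999 over ssh to reach it remotely.
    socketserver.TCPServer(("127.0.0.1", 9999), InfoHandler).serve_forever()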
Since you already have an SSH connection, I would suggest wrapping your commands with Python's sh library. It's really nice for this kind of task and you get results really fast.
from sh import ssh
myserver = ssh.bake("myserver.com", p=1393)
print(myserver) # "/usr/bin/ssh myserver.com -p 1393"
# resolves to "/usr/bin/ssh myserver.com -p 1393 whoami"
iam2 = myserver.whoami()

Decentralized networking in Python - How?

I want to write a Python script that will check the user's local network for other instances of the script currently running.
For the purposes of this question, let's say that I'm writing an application that runs solely via the command line, and will just update the screen when another instance of the application is "found" on the local network. Sample output below:
$ python question.py
Thanks for running ThisApp! You are 192.168.1.101.
Found 192.168.1.102 running this application.
Found 192.168.1.104 running this application.
What libraries/projects exist to help facilitate something like this?
One way to do this would be for the application in question to broadcast UDP packets, and for your application to receive them from the different nodes and display them. The Twisted networking framework provides facilities for doing such a job. The documentation provides some simple examples too.
Well, you could write something using the socket module. You would have to have two programs though: a server on the user's local computer, and a client program that interfaces with the server. The server would also use the select module to listen for multiple connections. The client program would send something to the server when it is run, or whenever you want it to. The server could then print out which connections it is maintaining, including details such as IP address.
This is documented extremely well at this link (more thoroughly than you need, but it will explain it to you as it did to me): http://ilab.cs.byu.edu/python/
You can try UDP broadcast; I found an example here: http://vizible.wordpress.com/2009/01/31/python-broadcast-udp/
You can have a server-based solution: a central server where clients register themselves, and query for other clients being registered. A server framework like Twisted can help here.
In a peer-to-peer setting, push technologies like UDP broadcasts can be used, where each client puts out a heartbeat packet every so often on the network for others to receive. Basic modules like socket would help with that (a minimal sketch appears at the end of this answer).
Alternatively, you could go for a pull approach, where the interested peer would need to discover the others actively. This is probably the least straightforward. For one, you need to scan the network, i.e. find out which IPs belong to the local network, and go through them. Then you would need to contact each IP in turn. If your program opens a TCP port, you could try to connect to it and find out whether your program is running there. If you want your program to be completely ignorant of these queries, you might need to open an ssh connection to the remote IP and scan the process list for your program. All this might involve various modules and libraries. One you might want to look at is execnet.
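Here is a minimal sketch of the heartbeat idea from the push approach above, using only the socket module (the port and payload are arbitrary choices for illustration):
import socket
import time

PORT = 54545                      # arbitrary port for this example
BEACON = b"ThisApp-heartbeat"

def broadcast_heartbeat():
    # Announce this instance to everyone on the local subnet every few seconds.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    while True:
        s.sendto(BEACON, ("<broadcast>", PORT))
        time.sleep(5)

def listen_for_peers():
    # Print the address of every peer whose heartbeat we receive.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    while True:
        data, (addr, _port) = s.recvfrom(1024)
        if data == BEACON:
            print("Found %s running this application." % addr)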
