Here is the code that I have so far:
import socks
import socket
import requests
import json
# Route every new socket through the local Tor SOCKS5 proxy
socks.setdefaultproxy(proxy_type=socks.PROXY_TYPE_SOCKS5, addr="127.0.0.1", port=9050)
socket.socket = socks.socksocket

data = json.loads(requests.get("http://freegeoip.net/json/").text)
and it works fine. The problem is that when I use a .onion URL it shows an error:
Failed to establish a new connection: [Errno -2] Name or service not known
After researching a little, I found that although the HTTP request is made over Tor, the name resolution still occurs over the clearnet. What is the proper way to have the domain resolved over the Tor network as well, so that I can connect to .onion URLs?
Try to avoid the monkey patching if possible. If you're using a modern version of requests, then you should have this functionality already.
import requests
import json
proxies = {
    'http': 'socks5h://127.0.0.1:9050',
    'https': 'socks5h://127.0.0.1:9050'
}

data = requests.get("http://altaddresswcxlld.onion", proxies=proxies).text
print(data)
It's important to specify the proxies using the socks5h:// scheme so that DNS resolution happens over the SOCKS connection, letting Tor resolve the .onion address properly.
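Note that SOCKS proxy support in requests is an optional extra; if requests complains about missing dependencies for SOCKS support when you use the socks5h:// scheme, install it first (this mirrors the install step shown in a later answer):
pip install requests[socks]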
There is a simpler solution for this, but for it you will need Kali Linux. If you have this OS, you can install the tor service and kalitorify, start the tor service with sudo service tor start, and start kalitorify with sudo kalitorify -t. Now your traffic will be sent through Tor, and you can access .onion sites just as if they were normal sites.
I am trying to import a Python library using:
import cenpy as cp
but I get an error message:
ConnectionError: HTTPSConnectionPool(host='api.census.gov', port=443): Max retries exceeded with url: /data.json (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000013167B552B0>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond'))
I have had this issue before while calling a website. It has to do with the proxy settings. I resolved those other issues using code like this:
import requests
s = requests.Session()
s.proxies = {
    "https": "https://user:pass@server:port",
    "http": "http://user:pass@server:port"
}
and then:
s.get('http://web.address')
Is there any way to implement the requests session so that I am able to import the library?
Using Python 3.9.12
So I did some more digging and found out the library does place a call to the API during import. There seems to be a workaround for this, but it is not implemented in their code yet. I tried a few more things and I wanted to share what worked for me. You have to make sure that the code below runs before you import the library that makes the call. This code should allow all other calls/GET requests to run through the proxy without having to use a requests Session.
The snippets below will set the proxy environment variables:
import os
os.environ['http_proxy'] = 'http://<user>:<pass>@<proxy>:<port>'
os.environ['https_proxy'] = 'http://<user>:<pass>@<proxy>:<port>'
Or to be more thorough:
import os
proxy = 'http://<user>:<pass>@<proxy>:<port>'
os.environ['http_proxy'] = proxy
os.environ['HTTP_PROXY'] = proxy
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
Remember that this should be at the very top of your script, or at least prior to any connection requests. Also, make sure you are using the correct IP address for the proxy, as that tripped me up as well.
Credit goes here and here.
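Putting this together for the cenpy case, a minimal sketch of the required ordering (the proxy URL is a placeholder; substitute your own credentials):
import os

# Set the proxy environment variables BEFORE the library is imported,
# since cenpy calls the API at import time.
proxy = 'http://<user>:<pass>@<proxy>:<port>'
os.environ['http_proxy'] = proxy
os.environ['HTTP_PROXY'] = proxy
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy

import cenpy as cp  # the import-time request now goes through the proxy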
I'm attending an online Python course for beginners. The content of a unit is to teach students to extract all links in the source code of a webpage. The code is as follows, with Block_of_Code unknown:
def get_page(url):
    <Block_of_Code>

def get_next_target(page):
    start_link = page.find('<a href=')
    if start_link == -1:
        return None, 0
    start_quote = page.find('"', start_link)
    end_quote = page.find('"', start_quote + 1)
    url = page[start_quote + 1:end_quote]
    return url, end_quote

def print_all_links(page):
    while True:
        url, endpos = get_next_target(page)
        if url:
            print(url)
            page = page[endpos:]
        else:
            break

print_all_links(get_page('https://youtube.com'))
If I were not in China, the Block_of_Code would not have been a problem for me. As far as I know, it could have been:
import urllib.request
return urllib.request.urlopen(url).read().decode('utf-8')
But here in China, certain websites (youtube included) are blocked. So the above code doesn't apply to them.
My goal for Block_of_Code is to get the source code of any website, whether blocked or not.
I have searched on Google and found some code using a SOCKS proxy, but none of it worked. For example, I wrote and tried the following code based on this article (having executed pip install PySocks):
import socket
import socks
import urllib.request
socks.set_default_proxy(socks.SOCKS5, "127.0.0.1", 2012)
socket.socket = socks.socksocket
return urllib.request.urlopen(url).read().decode('utf-8')
The error message is:
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
The reason I searched for code using a SOCKS proxy is that I have always used a SOCKS proxy service to visit blocked websites. By launching an app provided by my service provider, I am able to visit those websites in a web browser like Firefox. (My SOCKS proxy port is 2012.)
Nevertheless, any kind of solution is welcome, whether it is socks proxy or not, as long as it will enable me to get the source of any page.
I'm using Python 3.6.3 on Windows 10.
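For what it's worth, here is a minimal sketch of one possible Block_of_Code, using requests with the socks5h:// scheme from the first answer above (this assumes requests[socks] is installed and reuses the local SOCKS port 2012 mentioned in the question):
import requests

def get_page(url):
    # socks5h:// makes the proxy resolve hostnames too, so DNS lookups
    # for blocked sites are not leaked to (or blocked by) local resolvers.
    proxies = {
        'http': 'socks5h://127.0.0.1:2012',
        'https': 'socks5h://127.0.0.1:2012',
    }
    return requests.get(url, proxies=proxies).text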
I currently use this for my connection to a SOCKS5 proxy with paramiko:
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, socks_hostname, socks_port, True, socks_username, socks_password)
paramiko.client.socket.socket = socks.socksocket
ssh = paramiko.SSHClient()
However, I was hoping to make some requests in Python with requesocks using the same proxy settings as for paramiko, and couldn't find anything about supplying a username and password.
Additionally, all requests are made over a different SOCKS connection each time; global settings could get in the way of my other connections.
Any ideas on how this is done, or whether there is an alternative?
My current implementation uses Python requests very heavily, so it would be nice to transition from there to requesocks so that I don't have to refactor everything.
Note: "How to make python Requests work via socks proxy" doesn't work here, as it doesn't use SOCKS5 authentication.
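One way to avoid the global monkey patching on the paramiko side is to build a dedicated PySocks socket per connection and hand it to SSHClient.connect via its sock parameter. A sketch, assuming PySocks is installed (the ssh_* names are placeholders; the socks_* names are the ones from the snippet above):
import socks
import paramiko

# One proxied socket per connection; nothing global is modified.
sock = socks.socksocket()
sock.set_proxy(socks.SOCKS5, socks_hostname, socks_port, True, socks_username, socks_password)
sock.connect((ssh_hostname, 22))

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(ssh_hostname, username=ssh_username, password=ssh_password, sock=sock)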
You can use PySocks:
pip install PySocks
Then in your Python file:
import socket
import socks
import requests

socks.set_default_proxy(socks.SOCKS5, "127.0.0.1", 9050, True, 'socks5_user', 'socks_pass')
socket.socket = socks.socksocket

print(requests.get('http://ifconfig.me/ip').text)
It works for me, but I ran into another problem: using a different SOCKS5 proxy for different request sessions. If anyone has a solution for this, please contribute.
The modern way:
pip install -U requests[socks]
then
import requests
resp = requests.get('http://go.to',
                    proxies=dict(http='socks5://user:pass@host:port',
                                 https='socks5://user:pass@host:port'))
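Because the proxies are passed per request, this also addresses the per-session problem mentioned above: attach different proxy settings to different Session objects instead of patching the socket module globally. A sketch (host1/host2 and the credentials are placeholders):
import requests

s1 = requests.Session()
s1.proxies = {'http': 'socks5://user:pass@host1:port',
              'https': 'socks5://user:pass@host1:port'}

s2 = requests.Session()
s2.proxies = {'http': 'socks5://user:pass@host2:port',
              'https': 'socks5://user:pass@host2:port'}

# Each session now routes through its own SOCKS5 proxy.
print(s1.get('http://ifconfig.me/ip').text)
print(s2.get('http://ifconfig.me/ip').text)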
I am trying to use Tor to get a new IP every time I access a website:
import socks
import socket
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS4, '127.0.0.1', 9151, True)
socket.socket = socks.socksocket
import urllib2
print urllib2.urlopen("http://almien.co.uk/m/tools/net/ip/").read()
I have also tried ports 9150 and 9050.
I keep getting:
socks.ProxyConnectionError: Error connecting to SOCKS4 proxy 127.0.0.1:9151: [Errno 61] Connection refused
Use the stem package to interact with Tor. The official site has many tutorials for different cases, for example:
https://stem.torproject.org/tutorials/to_russia_with_love.html
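For the "new IP every time" part specifically, the usual approach with stem is to send Tor the NEWNYM signal over the control port between requests. A minimal sketch (assumes the control port is reachable on 9051 and that cookie or password authentication is configured):
from stem import Signal
from stem.control import Controller

# Ask Tor to build fresh circuits, which usually yields a new exit IP.
with Controller.from_port(port=9051) as controller:
    controller.authenticate()  # cookie auth, or pass password='...'
    controller.signal(Signal.NEWNYM)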
I am trying to use Tor with Python and urllib2 and am stuck. Both the following
print opener.open('http://check.torproject.org/').read()
and
telnet 127.0.0.1 9051
give me the following error:
514 Authentication Required.
Here is the code I want to use; I receive the same 514 Authentication Required error on the urllib2.urlopen call:
import urllib2

# using Tor!
proxy_support = urllib2.ProxyHandler({"http": "127.0.0.1:9051"})
opener = urllib2.build_opener(proxy_support)
urllib2.install_opener(opener)

# every urlopen connection will then use the Tor proxy like this one:
urllib2.urlopen('http://www.google.com').read()
Any suggestions on why this is occurring?
In the Tor Vidalia browser, under Settings -> Advanced, Authentication is set to 'Randomly Generate'.
I am using Python 2.6.5, urllib2, and Tor.
A Google search suggests (and the Tor manual confirms) that 9051 is Tor's default control port. The actual SOCKS proxy runs on port 9050 by default, which is the one you need to use. However, Vidalia does not use the default ports without extra configuration.
The other problem is that urllib2 is, by default, unable to work with SOCKS proxies. For possible solutions, see these two questions.
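As a sketch of how those solutions combine with the port fix, the PySocks monkey patch shown earlier in this thread, pointed at Tor's SOCKS port (9050, not the 9051 control port), should work with urllib2 (Python 2, matching the question):
import socks
import socket
import urllib2

# Patch the socket module so urllib2's connections go through Tor's
# SOCKS proxy on 9050; 9051 is the control port, which answers
# unauthenticated commands with "514 Authentication Required".
socks.set_default_proxy(socks.SOCKS5, "127.0.0.1", 9050)
socket.socket = socks.socksocket

print urllib2.urlopen('http://check.torproject.org/').read()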