Running Python script on Visual Studio local website

Hello, I have a script that I run on my organization's internal network. It was supposed to run on the 1st, but it didn't, so I made a backup of the data in my local database so that I can run the script and get the correct data. I changed the URL so it lines up with my local site, but it is not working; I get the following error:
HTTPSConnectionPool(host='localhost', port=44345): Max retries exceeded with url: /logon.aspx (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fc26acb86a0>: Failed to establish a new connection: [Errno 111] Connection refused',)
Here is how I set up my script to access the URL:
URL = "https://localhost:44345/logon.aspx"
headers = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36"}
username="script"
password="password"
s = Session()
s.verify = False
s.headers.update(headers)
r = s.get(URL)
Why is my connection being refused? I can browse to the site in my web browser, so why does the script get a connection refused error?

Since you are running on localhost, try the http protocol instead of https.
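Concretely, that means changing only the scheme; a minimal sketch of the suggested fix (note that a Visual Studio/IIS Express project often serves plain HTTP on a different port than HTTPS, so the port below is an assumption worth checking in the project settings):

from requests import Session

# Assumption: the site also listens for plain HTTP on this port;
# IIS Express usually assigns HTTP and HTTPS different ports.
URL = "http://localhost:44345/logon.aspx"

s = Session()
r = s.get(URL)
print(r.status_code)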

Related

How to access site with requests and SOCKS5 with Python 3

I'm using requests in Python 3.8 to connect to an Amazon web page.
I'm also using Tor, in order to connect via SOCKS5.
This is the relevant piece of code:
import requests

session = requests.session()
session.headers.update({'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) '
                                      'Chrome/44.0.2403.157 Safari/537.36'})
anon = {'http': "socks5://localhost:9050", 'https': "socks5://localhost:9050"}
r = session.get("myurl", proxies=anon)
print(r.content)
However, it doesn't work: Amazon returns a 503 error. What I need to know is whether there is some way to get around this, or whether it comes down to some sort of IP blocking.
Thank you
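One detail worth checking in this setup (a sketch, not a confirmed fix for the 503): requests needs the PySocks extra for SOCKS proxies (pip install requests[socks]), and the socks5h:// scheme resolves DNS through the proxy as well, which plain socks5:// does not:

import requests

session = requests.Session()
# socks5h:// sends hostname resolution through Tor too, instead of
# leaking DNS lookups to the local resolver.
anon = {'http': "socks5h://localhost:9050", 'https': "socks5h://localhost:9050"}
r = session.get("myurl", proxies=anon)  # "myurl" as in the question
print(r.status_code)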

Requests.get does not work, Failed to establish a new connection

I am trying to do some web scraping. At first the code was working, but later it stopped. The code is:
import requests
import hashlib
from bs4 import BeautifulSoup

def sha512(x):
    m = hashlib.sha512(x.encode())
    return m.hexdigest()

session = requests.Session()
session.cookies["user-agent"] = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36"
r = session.post("https://ringzer0ctf.com/login", data={"username":"myusername","password":"mypass"})
r = session.get("https://ringzeractf.com/challenges/13")
soup = BeautifulSoup(r.text, 'html.parser')
It gives an error like:
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='ringzeractf.com', port=443): Max retries exceeded
with url: /challenges/13 (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x04228490>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
Your URL in the GET request is wrong. Change ringzeractf to ringzer0ctf.
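Applied to the code in the question, only the hostname in the GET line changes:

r = session.get("https://ringzer0ctf.com/challenges/13")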

"Connection aborted" error when using Python Requests

I'm trying to scrape price information from the Carrefour website with the following code:
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36',
}
link = 'https://www.carrefour.es/extensor-tp-link-rango-75/366506176/p'
response = requests.get(link, headers=headers, timeout=60)
# Following lines parse response with Beautifulsoup
This code ends up with a "Connection aborted" error. The error message is slightly different on Linux and Windows.
When run on AWS (Ubuntu), it always results in an error with the message below:
requests.exceptions.ConnectionError: ('Connection aborted.', BadStatusLine("''",))
When run on my PC (Win10), it sometimes works correctly and sometimes fails with the message below:
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))
I don't think this is caused by an IP restriction or by the User-Agent: although the script cannot connect, I can still visit the link from my PC (Win10) with the same IP and User-Agent.
Am I blocked? What could be the reason for being blocked?
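For what it's worth, a common first step with RemoteDisconnected/BadStatusLine errors is to rule out header-based filtering beyond the User-Agent; here is a sketch that sends a fuller set of browser-like headers (the extra values are typical browser defaults, not something confirmed to unblock this particular site):

import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.5',
    'Connection': 'keep-alive',
}
link = 'https://www.carrefour.es/extensor-tp-link-rango-75/366506176/p'
response = requests.get(link, headers=headers, timeout=60)
print(response.status_code)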

Python3 requests ConnectionResetError(10054) when opening a picture

I was trying to download pictures from URLs like 'http://xxx.jpg'.
The code:
import requests

headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.139 Safari/537.36'}
url = 'http://xxx.jpg'
response = requests.get(url, headers=headers)
downloadFunction()  # placeholder for the code that saves the image
The error reads:
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None))
The error occurred on the very first request, so it wasn't the request frequency that caused it. And I could still open the URL in a browser, so I just needed the code to act more like a browser. How can I achieve that, besides setting the user-agent?
I know it isn't your case, and this is really old, but when searching Google I stumbled on this, so I'll leave what solved my problem here:
test_link = "https://www.bbb.org/washington-dc-eastern-pa/business-reviews/online-education/k12-inc-in-herndon-va-190911943/#sealclick"
page = requests.get(test_link)
I got the error:
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None))
So it isn't multiple connections. I think the problem was the headers: once I added headers, the error disappeared. This is the code afterwards:
test_link = "https://www.bbb.org/washington-dc-eastern-pa/business-reviews/online-education/k12-inc-in-herndon-va-190911943/#sealclick"
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:66.0) Gecko/20100101 Firefox/66.0",
"Accept-Encoding": "*",
"Connection": "keep-alive"
}
page = requests.get(test_link, headers=headers)
I had this error when the server was hosted on my machine over HTTPS and the SSL certificate was not correctly installed.
Following these instructions to properly install the server's certificates solved the problem:
https://coderead.wordpress.com/2014/08/07/enabling-ssl-for-self-hosted-nancy/
https://www.cloudinsidr.com/content/how-to-install-the-most-recent-version-of-openssl-on-windows-10-in-64-bit/
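On the client side, the counterpart to a properly installed certificate is telling requests which certificate to trust instead of disabling verification; a minimal sketch, where the URL and the .pem path are hypothetical placeholders:

import requests

# 'local-dev-cert.pem' stands for the server's exported certificate;
# passing it via verify= lets requests validate the self-signed cert.
r = requests.get("https://localhost:8443/", verify="local-dev-cert.pem")
print(r.status_code)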
For me, I had to add headers with Content-Type and accept (since those two fields were required by the API), and everything worked fine :)
from os import environ

import requests

headers = {
    'Content-Type': 'application/json',
    'accept': 'application/json',
}
# 'lead' is the JSON payload for the request
result = requests.post(environ.get('customer_api_url'),
                       headers=headers,
                       json=lead)

Requests.Get fails on one server but works on another

I have the following Python 2.7 code:
import requests
from urllib3 import Retry

s = requests.Session()

# Retry each scheme up to 3 times
http_retries = Retry(3)
https_retries = Retry(3)
http = requests.adapters.HTTPAdapter(max_retries=http_retries)
https = requests.adapters.HTTPAdapter(max_retries=https_retries)

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36'}

s.mount('http://', http)
s.mount('https://', https)

# URL is defined elsewhere in the script
response = s.get(URL, headers=headers, timeout=10)
I keep getting
Failed to establish a new connection: [Errno 101] Network is unreachable
when I run the script from an Amazon AWS instance, but on another network it works fine. Any idea why?
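One way to narrow this down (a diagnostic sketch; [Errno 101] means the operating system found no route to the host, which points at instance networking such as blocked egress rather than the Python code): open a raw TCP connection from the same machine, independent of requests. Here example.com stands in for the real host:

import socket

# If this raw connection also fails, the problem is network-level
# (routing, security groups, egress rules), not requests/urllib3.
sock = socket.create_connection(("example.com", 443), timeout=10)
sock.close()
print("TCP connection OK")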
