I have created a Python script that connects to Phoenix (HBase) to analyze some data. I want to set this script up in crontab on an Ubuntu server that I have running.
The script runs perfectly on my Windows 10 machine, but when I try to use the phoenixdb connector on Ubuntu I get a runtime error.
>>> import phoenixdb
>>> url = '<some-url>'
>>> conn = phoenixdb.connect(url, autocommit=True)
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python3.5/site-packages/phoenixdb/avatica.py", line 156, in connect
self.connection.connect()
File "/usr/lib/python3.5/http/client.py", line 849, in connect
(self.host,self.port), self.timeout, self.source_address)
File "/usr/lib/python3.5/socket.py", line 711, in create_connection
raise err
File "/usr/lib/python3.5/socket.py", line 702, in create_connection
sock.connect(sa)
TimeoutError: [Errno 110] Connection timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/.local/lib/python3.5/site-packages/phoenixdb/__init__.py", line 63, in connect
client.connect()
File "/home/ubuntu/.local/lib/python3.5/site-packages/phoenixdb/avatica.py", line 158, in connect
raise errors.InterfaceError('Unable to connect to the specified service', e)
phoenixdb.errors.InterfaceError: ('Unable to connect to the specified service', TimeoutError(110, 'Connection timed out'), None, None)
I was hoping someone here knows a way to fix this problem.
I am running Python 3.6 on Windows and Python 3.5.2 on Ubuntu, but I doubt that is the problem.
EDIT:
I have now started a Windows 2012 Server and have tried setting my script up here as well, and it seems not to be a problem solely for Ubuntu. I am getting the exact same error on Windows.
>>> import phoenixdb
>>> url = '<some-url>'
>>> conn = phoenixdb.connect(url, autocommit=True)
Traceback (most recent call last):
File "C:\Users\Administrator\Anaconda3\lib\site-packages\phoenixdb\avatica.py", line 156, in connect
self.connection.connect()
File "C:\Users\Administrator\Anaconda3\lib\http\client.py", line 936, in connect
(self.host,self.port), self.timeout, self.source_address)
File "C:\Users\Administrator\Anaconda3\lib\socket.py", line 722, in create_connection raise err
File "C:\Users\Administrator\Anaconda3\lib\socket.py", line 713, in create_connection
sock.connect(sa)
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Administrator\Anaconda3\lib\site-packages\phoenixdb\__init__.py", line 63, in connect
client.connect()
File "C:\Users\Administrator\Anaconda3\lib\site-packages\phoenixdb\avatica.py", line 158, in connect
raise errors.InterfaceError('Unable to connect to the specified service', e)
phoenixdb.errors.InterfaceError: ('Unable to connect to the specified service',
TimeoutError(10060, 'A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond', None, 10060, None), None, None)
I recently formatted the PC that I developed the script on. That machine was using this phoenixdb connector, and I did not experience a similar problem there.
I also tried installing Python 3.6 on the Windows server (the same Python version that I installed on my normal PC, the one I developed the script on).
I'm really at a loss for a solution.
I finally found the problem. It had nothing to do with the machines I set up my script on. It had to do with the security settings on the AWS machines that both the Ubuntu and Windows servers were running on.
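For anyone hitting the same timeout: before touching the client code, a quick raw-socket check against the query server host and port tells you whether the security group or firewall is letting traffic through at all. This is only a sketch; the hostname below is a placeholder and 8765 is the default Phoenix Query Server port, so substitute whatever your connection URL actually uses.
import socket

# Placeholder host; 8765 is the Phoenix Query Server default port.
host, port = 'phoenix-server.example.com', 8765
try:
    socket.create_connection((host, port), timeout=5).close()
    print('query server port is reachable')
except OSError as exc:
    print('blocked or unreachable:', exc)
If this times out as well, the problem is network/security-group configuration rather than phoenixdb itself.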
Related
I am an absolute beginner at programming in general. I've tried to create an SQL database.
Unfortunately, this part of my code always returns an error:
import mysql.connector

MainDatabase = mysql.connector.connect(
    host="127.0.0.1",
    user="Username",
    password="Idon'tknowE",
    database="mydatabase"
)
The error being:
Traceback (most recent call last):
File "/Users/My Name/xx/Program/Scripts/test.py", line xx, in <module>
database="mydatabase"
File "/Users/My Name/Library/Python/3.7/lib/python/site-packages/mysql/connector/__init__.py", line 179, in connect
return MySQLConnection(*args, **kwargs)
File "/Users/My Name/Library/Python/3.7/lib/python/site-packages/mysql/connector/connection.py", line 95, in __init__
self.connect(**kwargs)
File "/Users/My Name/Library/Python/3.7/lib/python/site-packages/mysql/connector/abstracts.py", line 716, in connect
self._open_connection()
File "/Users/My Name/Library/Python/3.7/lib/python/site-packages/mysql/connector/connection.py", line 206, in _open_connection
self._socket.open_connection()
File "/Users/My Name/Library/Python/3.7/lib/python/site-packages/mysql/connector/network.py", line 512, in open_connection
errno=2003, values=(self.get_address(), _strioerror(err)))
mysql.connector.errors.InterfaceError: 2003: Can't connect to MySQL server on 'localhost:3306' (61 Connection refused)
I am running this on a Mac, more specifically Catalina. Does anyone have an idea of what's going on or how to fix this?
MySQL is a separate application from Python that is supposed to run at the same time as your application; the code you wrote then connects to it.
Since MySQL is not running on your computer, your code cannot work. Either that, or MySQL is running but not at the address you specified.
You can start here to install it:
https://dev.mysql.com/doc/refman/5.7/en/installing.html
You will then need to set up a user with credentials that you'll use in your code, and then you need to ensure you are using the correct address for where MySQL is running.
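Once the server is installed and running, a minimal connection looks roughly like this (the user, password, and database names are placeholders for your own):
import mysql.connector

# Placeholder credentials and database name; adjust to your setup.
conn = mysql.connector.connect(
    host="127.0.0.1",
    port=3306,
    user="Username",
    password="your-password",
    database="mydatabase",
)
print(conn.is_connected())
conn.close()
If this still raises "Connection refused", the server is not listening on that host/port.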
I'm trying to deploy code to a server to which I don't have root access, so the solution I came up with was to deploy using PyInstaller and staticx.
My code runs on Python 3.7 and, in a nutshell, does something like:
import requests
response = requests.get('http://example.com/api/action')
# Do something with the response
When I run it in my environment, even after "building" with pyinstaller and staticx, it works flawlessly.
However, when I try to deploy it (the server is running Red Hat Enterprise Linux Server release 7.4, which does not have python 3), I get:
Traceback (most recent call last):
File "site-packages/urllib3/connection.py", line 160, in _new_conn
File "site-packages/urllib3/util/connection.py", line 57, in create_connection
File "socket.py", line 748, in getaddrinfo
socket.gaierror: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "site-packages/urllib3/connectionpool.py", line 603, in urlopen
File "site-packages/urllib3/connectionpool.py", line 355, in _make_request
File "http/client.py", line 1229, in request
File "http/client.py", line 1275, in _send_request
File "http/client.py", line 1224, in endheaders
File "http/client.py", line 1016, in _send_output
File "http/client.py", line 956, in send
File "site-packages/urllib3/connection.py", line 183, in connect
File "site-packages/urllib3/connection.py", line 169, in _new_conn
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f912ca96f28>: Failed to establish a new connection: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "site-packages/requests/adapters.py", line 449, in send
File "site-packages/urllib3/connectionpool.py", line 641, in urlopen
File "site-packages/urllib3/util/retry.py", line 399, in increment
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='someserver.com', port=80): Max retries exceeded with url: /api/action (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f912ca96f28>: Failed to establish a new connection: [Errno -2] Name or service not known'))
Everything works if I use IP addresses instead of domains. Unfortunately, that is not an option, because I have to hit a proxy that uses hostnames, and the IP addresses may change in the future.
The server resolves names perfectly fine if I use nslookup or ping.
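To confirm the failure happens inside the bundled interpreter rather than in the OS resolver, one hedged diagnostic is to package a tiny resolver check with the same PyInstaller/staticx pipeline and run it on the target server (the hostname below is just the one from the traceback):
import socket

# Try to resolve a name from inside the packaged binary; if this raises
# gaierror while nslookup on the same host succeeds, the problem is in the
# bundled resolver, not the server's DNS.
try:
    print(socket.getaddrinfo('someserver.com', 80))
except socket.gaierror as exc:
    print('resolution failed inside the bundle:', exc)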
Any ideas why this could be happening? Any alternative on how I could deploy my code in a way that works on the remote server may also be a valid answer. Note, however:
I have no root access, and requesting the installation of Python 3, any libs, or any other package such as Docker would most likely be rejected by my customer.
The server can resolve external names such as google.com, but does not have access to the outside internet (the code is meant to work on internal servers, with internal DNS such as someserver.mycustomer.example).
My goal is to have a basic FTP program written in Python, but first I need to gain more knowledge. My question is: how can I connect to an Ubuntu Server (hosted via VirtualBox) using Python?
I have tried following the page on the official Python website, but I get an error saying socket.error: [Errno 61] Connection refused when using this:
from ftplib import FTP
ftp = FTP('jordan#10.0.0.12')
This is the output I get when using ftp = FTP('10.0.0.12'):
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ftplib.py", line 120, in __init__
self.connect(host)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ftplib.py", line 135, in connect
self.sock = socket.create_connection((self.host, self.port), self.timeout)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 575, in create_connection
raise err
socket.error: [Errno 61] Connection refused
I can use an FTP program such as Transmit (over SFTP, on the same port) on the same machine and it works fine.
Python's ftplib does not support SFTP, so using PySFTP would work.
An alternative is using paramiko.
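A minimal paramiko sketch, assuming SSH is running on port 22 with password authentication (the host, username, password, and paths are placeholders):
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # accept the VM's host key for this test
ssh.connect('10.0.0.12', port=22, username='jordan', password='your-password')
sftp = ssh.open_sftp()
sftp.get('/remote/path/file.txt', 'file.txt')  # download a file over SFTP
sftp.close()
ssh.close()
Note that SFTP talks to the SSH daemon (port 22), not to an FTP server, which is why Transmit over SFTP works while ftplib is refused.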
I have a Python 3.5 script that pulls info from another Linux server. It must first connect over port 443 to establish a connection.
I can ping the Linux server, but I get the following error instead:
Traceback (most recent call last):
File "/opt/saddlesum/webapp_Py3/lib/python3.5/site- packages/requests/packages/urllib3/connectionpool.py", line 544, in urlopen
body=body, headers=headers)
File "/opt/saddlesum/webapp_Py3/lib/python3.5/site- packages/requests/packages/urllib3/connectionpool.py", line 341, in _make_request
self._validate_conn(conn)
File "/opt/saddlesum/webapp_Py3/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py", line 761, in _validate_conn
conn.connect()
File "/opt/saddlesum/webapp_Py3/lib/python3.5/site-packages/requests/packages/urllib3/connection.py", line 204, in connect
conn = self._new_conn()
File "/opt/saddlesum/webapp_Py3/lib/python3.5/site-packages/requests/packages/urllib3/connection.py", line 134, in _new_conn
(self.host, self.port), self.timeout, **extra_kw)
File "/opt/saddlesum/webapp_Py3/lib/python3.5/site-packages/requests/packages/urllib3/util/connection.py", line 88, in create_connection
raise err
File "/opt/saddlesum/webapp_Py3/lib/python3.5/site-packages/requests/packages/urllib3/util/connection.py", line 78, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
The problem was caused by a duplicate IP address in my /etc/hosts file. Once I deleted the incorrect entry, everything started working.
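If you suspect the same thing, a quick way to see which addresses a hostname actually resolves to from Python is something like the following (the hostname is a placeholder):
import socket

# Returns (canonical name, aliases, address list); a stale /etc/hosts entry
# shows up here as an unexpected address.
print(socket.gethostbyname_ex('linux-server.example.com'))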
Hi, did you finally solve this issue? I'm having the same trouble.
I set up a Wagtail app with Postgres, Nginx, and Gunicorn on Ubuntu 20. After the configuration, here is what I get when I send a message from the contact app:
Internal server error
Sorry, there seems to be an error. Please try again soon.
I have Sentry running, and it reports errors like these:
1 - ConnectionRefusedError /{var} (unhandled): [Errno 111] Connection refused
2 - Invalid HTTP_HOST header: '${ip}:${port}'. The domain name
I ran a Python program in Cygwin to connect to AWS, but it consistently failed with a timeout. However, my connection to AWS using the AWS CLI in Cygwin always works, and if I run the same Python code on Windows, it also works. I have checked all the connection credentials, which are the same in every case.
Here is the error message:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/site-packages/boto-2.38.0-py2.7.egg/boto/ec2/connection.py", line 585, in get_all_instances
max_results=max_results)
File "/usr/lib/python2.7/site-packages/boto-2.38.0-py2.7.egg/boto/ec2/connection.py", line 681, in get_all_reservations
[('item', Reservation)], verb='POST')
File "/usr/lib/python2.7/site-packages/boto-2.38.0-py2.7.egg/boto/connection.py", line 1170, in get_list
response = self.make_request(action, params, path, verb)
File "/usr/lib/python2.7/site-packages/boto-2.38.0-py2.7.egg/boto/connection.py", line 1116, in make_request
return self._mexe(http_request)
File "/usr/lib/python2.7/site-packages/boto-2.38.0-py2.7.egg/boto/connection.py", line 1030, in _mexe
raise ex
socket.error: [Errno 116] Connection timed out
I have found out that the culprit lies in the proxy environment variables.
I set HTTP_PROXY and HTTPS_PROXY as Windows environment variables. However, when run in Cygwin, boto looks for 'http_proxy' without considering the case of the name
(see handle_proxy() in boto/connection.py,
line 669: if 'http_proxy' in os.environ and not self.proxy:).
When I changed the upper-case HTTP_PROXY to lower-case http_proxy, it worked. I'm not sure why this isn't a problem when I run Python on Windows; most likely it is because environment variable lookups on Windows are case-insensitive, while on Cygwin and Linux they are case-sensitive.
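A workaround sketch, if you cannot rename the Windows variables: mirror the upper-case values into the lower-case names that boto looks for before creating the connection (the region name is a placeholder):
import os

# Copy HTTP(S)_PROXY into the lower-case names that boto's handle_proxy() checks.
for upper, lower in (('HTTP_PROXY', 'http_proxy'), ('HTTPS_PROXY', 'https_proxy')):
    if upper in os.environ and lower not in os.environ:
        os.environ[lower] = os.environ[upper]

import boto.ec2
conn = boto.ec2.connect_to_region('us-east-1')  # placeholder region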