I have a Couchbase 6.0 server running on Linode, and I'm using the Python SDK to insert data into my Couchbase bucket. When run directly on the Linode server, my data gets inserted.
However, when I run my code from a remote machine I get a network error:
CouchbaseNetworkError, CouchbaseTransientError): <RC=0x2C[The remote host refused the connection.
I have ports 8091, 8092, 8093, and 8094 open on Linode.
from couchbase.cluster import Cluster
from couchbase.cluster import PasswordAuthenticator
# linode ip: 1.2.3.4
cluster = Cluster('couchbase://1.2.3.4:8094')
cluster.authenticate(PasswordAuthenticator('admin', 'password'))
bucket = cluster.open_bucket('test_bucket')
bucket.upsert('1',{"foo":"bar"})
My code executes when run on the server with couchbase://localhost, but it fails when run from a remote machine. Is there any port or configuration I am missing?
From the Couchbase documentation on network ports, for client-to-node traffic (between any clients/app servers/SDKs and all nodes of each cluster they require access to):
Unencrypted: 8091-8096, 11210, 11211
Encrypted: 18091-18096, 11207
Opening ports 11210 and 11211 worked for me.
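To quickly confirm from the remote machine which of those ports are actually reachable before involving the SDK, a small socket check helps (just a sketch; 1.2.3.4 stands in for the example Linode IP above):
import socket

# Client-to-node ports from the documentation quoted above
for port in [8091, 11210, 11211]:
    try:
        # A plain TCP connection succeeding means the port is open end to end
        with socket.create_connection(("1.2.3.4", port), timeout=5):
            print("port {}: reachable".format(port))
    except OSError as exc:
        print("port {}: blocked ({})".format(port, exc))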
I am relatively new to this topic. I am trying to build a web app using Flask. The web app uses data from a PostgreSQL database that runs locally (macOS Monterey 12.2.1).
My application uses Python code that accesses data from the database by connecting with psycopg2:
con = psycopg2.connect(
    host="192.168.178.43",
    database=self.database,
    port="5432",
    user="user",
    password="password")
I have already added the relevant entries to pg_hba.conf and postgresql.conf to allow access from my home network, but I still get an error when starting the container. The app runs perfectly outside the container, so I think I am missing some important step for a successful connection from inside it.
The error is the following:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
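One detail worth noting (an assumption on my part, since the full Docker setup isn't shown): inside a container, 127.0.0.1/localhost refers to the container itself, not the Mac running Postgres, so the connection has to target the host. Below is a minimal sketch using Docker Desktop's host alias; host.docker.internal and the database name are placeholders, and the host's LAN IP (192.168.178.43) should also work as long as listen_addresses and pg_hba.conf allow it:
import psycopg2

con = psycopg2.connect(
    host="host.docker.internal",  # assumption: Docker Desktop on macOS; or the host's LAN IP
    database="mydb",              # placeholder database name
    port="5432",
    user="user",
    password="password")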
So I'm using a bastion host/SSH tunnel to connect from my local computer to AWS Neptune.
ssh -N -i /Users/user1/.ssh/id_rsa -L 8182:my.xxx.us-east-1.neptune.amazonaws.com:8182 user1@transporter-int.mycloud.com
I did a simple Neptune connection test with gremlin.
from gremlin_python.process.graph_traversal import __
from gremlin_python.structure.graph import Graph
from gremlin_python.process.strategies import *
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.traversal import T
graph = Graph()
wss = 'wss://{}:{}/gremlin'.format('localhost', 8182)
remoteConn = DriverRemoteConnection(wss, 'g')
g = graph.traversal().withRemote(remoteConn)
print(g.V().limit(2).toList())
remoteConn.close()
And I get this error:
aiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host localhost:8182 ssl:True [SSLCertVerificationError: (1, "[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'localhost'. (_ssl.c:1124)")]
Following @Taylor Riggan's suggestion, I switched to Python 3.6.12 and gremlin-python 3.4.10, and updated /etc/hosts on my Mac to the following:
127.0.0.1 localhost my.cluster-xxx.us-east-1.neptune.amazonaws.com
Then I ran the following command to flush the DNS cache:
sudo dscacheutil -flushcache
and updated this line in the source code:
wss = 'wss://{}:{}/gremlin'.format('my.cluster-xxx.us-east-1.neptune.amazonaws.com', 8182)
Now I am getting the following error (tornado version 4.5.3):
File "/Users/user1/myproj/tests/graph/venv/lib/python3.6/site-packages/gremlin_python/driver/client.py", line 148, in submitAsync
return conn.write(message)
File "/Users/user1/myproj/tests/graph/venv/lib/python3.6/site-packages/gremlin_python/driver/connection.py", line 55, in write
self.connect()
File "/Users/user1/myproj/tests/graph/venv/lib/python3.6/site-packages/gremlin_python/driver/connection.py", line 45, in connect
self._transport.connect(self._url, self._headers)
File "/Users/user1/myproj/tests/graph/venv/lib/python3.6/site-packages/gremlin_python/driver/tornado/transport.py", line 41, in connect
lambda: websocket.websocket_connect(url, compression_options=self._compression_options))
File "/Users/user1/myproj/tests/graph/venv/lib/python3.6/site-packages/tornado/ioloop.py", line 576, in run_sync
return future_cell[0].result()
tornado.httpclient.HTTPClientError: HTTP 403: Forbidden
The easiest workaround for this is to add an entry to the /etc/hosts file on your dev desktop that resolves the Neptune endpoint to localhost. Then the cert validation should go through.
Ex:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost myneptune-cluster.region.neptune.amazonaws.com
255.255.255.255 broadcasthost
::1 localhost
Check the guide Connect to AWS Neptune from the local system
Connect to AWS Neptune from the local system
There are many ways to connect to Amazon Neptune from outside of the VPC, such as setting up a load balancer or VPC peering.
Amazon Neptune DB clusters can only be created in an Amazon Virtual Private Cloud (VPC). One way to connect to Amazon Neptune from outside of the VPC is to set up an Amazon EC2 instance as a proxy server within the same VPC. With this approach, you will also want to set up an SSH tunnel to securely forward traffic to the VPC.
Part 1: Set up an EC2 proxy server.
Launch an Amazon EC2 instance located in the same region as your Neptune cluster. Ubuntu works fine as the configuration, and since this is only a proxy server you can choose the lowest resource settings.
Make sure the EC2 instance is in the same VPC as your Neptune cluster. To find the VPC for your Neptune cluster, check the console under Neptune > Subnet groups. The instance's security group needs to be able to send and receive on port 22 for SSH and port 8182 for Neptune. See below for an example security group setup.
[Image: example security group with inbound rules allowing port 22 (SSH) and port 8182 (Neptune)]
Lastly, make sure you save the key-pair file (.pem) and note the directory for use in the next step.
Part 2: Set up an SSH tunnel.
This step can vary depending on whether you are running Windows or macOS.
Modify your hosts file to map localhost to your Neptune endpoint.
Windows: Open the hosts file as an Administrator (C:\Windows\System32\drivers\etc\hosts)
MacOS: Open Terminal and type in the command: sudo nano /etc/hosts
Add the following line to the hosts file, replacing the text with your Neptune endpoint address.
127.0.0.1 localhost YourNeptuneEndpoint
Open Command Prompt as an Administrator for Windows or Terminal for MacOS and run the following command. For Windows, you may need to run SSH from C:\Users\YourUsername\
ssh -i path/to/keypairfilename.pem ec2-user@yourec2instanceendpoint -N -L 8182:YourNeptuneEndpoint:8182
The -N flag prevents an interactive bash session with EC2 and forwards ports only. On the first successful connection you will be asked whether you want to continue connecting; type yes and press Enter.
To test the success of your local graph-notebook connection to Amazon Neptune, open a browser and navigate to:
https://YourNeptuneEndpoint:8182/status
You should see a report, similar to the one below, indicating the status and details of your specific cluster:
{
"status": "healthy",
"startTime": "Wed Nov 04 23:24:44 UTC 2020",
"dbEngineVersion": "1.0.3.0.R1",
"role": "writer",
"gremlin": {
"version": "tinkerpop-3.4.3"
},
"sparql": {
"version": "sparql-1.1"
},
"labMode": {
"ObjectIndex": "disabled",
"DFEQueryEngine": "disabled",
"ReadWriteConflictDetection": "enabled"
}
}
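If you would rather test from Python than a browser, a short requests call works as well (a sketch; it assumes the hosts-file entry and SSH tunnel from the steps above are in place, and YourNeptuneEndpoint is a placeholder):
import requests

# The /etc/hosts entry makes this hostname resolve to 127.0.0.1, so the request
# goes through the local SSH tunnel on port 8182 while TLS still validates.
resp = requests.get("https://YourNeptuneEndpoint:8182/status", timeout=10)
resp.raise_for_status()
print(resp.json())  # should print a status document like the one above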
Close Connection
When you're ready to close the connection, use Ctrl+D to exit.
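Putting the pieces together, the gremlin-python connection then looks like the code in the question with the real endpoint in place of localhost (YourNeptuneEndpoint is a placeholder; the hosts-file entry points it at 127.0.0.1, so the websocket travels through the tunnel while the certificate hostname still matches):
from gremlin_python.structure.graph import Graph
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

remote = DriverRemoteConnection('wss://YourNeptuneEndpoint:8182/gremlin', 'g')
g = Graph().traversal().withRemote(remote)
print(g.V().limit(2).toList())
remote.close()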
I can connect locally to my MongoDB server (with 0.0.0.0/0 allowed). However, when I deploy my code to Google Cloud Functions I get an error.
- Google Cloud Functions with Python 3.7 (beta)
- MongoDB Atlas
- Python libraries: pymongo, dnspython
Error: function crashed. Details:
All nameservers failed to answer the query _mongodb._tcp.**-***.gcp.mongodb.net. IN SRV: Server ***.***.***.*** UDP port 53 answered SERVFAIL
Traceback (most recent call last):
  File "/env/local/lib/python3.7/site-packages/pymongo/uri_parser.py", line 287, in _get_dns_srv_hosts
    results = resolver.query('_mongodb._tcp.' + hostname, 'SRV')
  File "/env/local/lib/python3.7/site-packages/dns/resolver.py", line 1132, in query
    raise_on_no_answer, source_port)
  File "/env/local/lib/python3.7/site-packages/dns/resolver.py", line 947, in query
    raise NoNameservers(request=request, errors=errors)
dns.resolver.NoNameservers: All nameservers failed to answer the query _mongodb._tcp.**mymongodb**-r091o.gcp.mongodb.net. IN SRV: Server ***.***.***.*** UDP port 53
Finally, after being stuck for two days (I felt really dumb, up all night with this), I just changed the connection string
from
SRV connection string (3.6+ driver)
to
Standard connection string (3.4+ driver)
mongodb://<USERNAME>:<PASSWORD>@<DATABASE>-shard-00-00-r091o.gcp.mongodb.net:27017,<COLLECTION>-shard-00-01-r091o.gcp.mongodb.net:27017,<COLLECTION>-shard-00-02-r091o.gcp.mongodb.net:27017/test?ssl=true&replicaSet=<COLLECTION>-shard-0&authSource=admin&retryWrites=true
or you can copy your connection string from the Atlas MongoDB UI.
I don't know why the SRV connection string can't connect from Google Cloud Functions; maybe it isn't supported yet, or it's just a misconfiguration.
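For reference, a minimal pymongo sketch using a standard (non-SRV) connection string looks like this; the hosts, credentials, and replica-set name below are placeholders patterned on the string above:
from pymongo import MongoClient

# Placeholders only; copy the real hosts and replicaSet value from the
# Atlas "Standard connection string (3.4+ driver)" tab.
uri = (
    "mongodb://user:password@"
    "cluster0-shard-00-00-r091o.gcp.mongodb.net:27017,"
    "cluster0-shard-00-01-r091o.gcp.mongodb.net:27017,"
    "cluster0-shard-00-02-r091o.gcp.mongodb.net:27017/test"
    "?ssl=true&replicaSet=cluster0-shard-0&authSource=admin&retryWrites=true"
)
client = MongoClient(uri)
print(client.admin.command("ping"))  # quick connectivity check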
This might help someone: I struggled with this error on Windows when I tried to connect to MongoDB with the mongodb+srv:// syntax; it worked fine through WSL or on a Linux machine.
From https://forum.omz-software.com/topic/6751/pymongo-errors-configurationerror-resolver-configuration-could-not-be-read-or-specified-no-nameservers/4
import dns.resolver

dns.resolver.default_resolver = dns.resolver.Resolver(configure=False)
# Google's public DNS server; use whatever DNS server you like here
dns.resolver.default_resolver.nameservers = ['8.8.8.8']
# As a test, dns.resolver.query('www.google.com') should return an answer, not an exception
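In practice the override just needs to run before pymongo performs the SRV lookup, e.g. (a sketch with a placeholder URI):
import dns.resolver
from pymongo import MongoClient

# Point dnspython at a resolver that can answer SRV queries
dns.resolver.default_resolver = dns.resolver.Resolver(configure=False)
dns.resolver.default_resolver.nameservers = ['8.8.8.8']

# Placeholder SRV URI; pymongo resolves the _mongodb._tcp SRV record itself
client = MongoClient("mongodb+srv://user:password@cluster0.example.mongodb.net/test")
print(client.admin.command("ping"))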
Most of the time this error occurs when your network isn't configured with the 8.8.8.8 DNS address.
Type the following command at a command prompt to verify:
ipconfig /all
Check your IP configuration; you should see the following in it:
DNS Servers . . . . . . . . . . . : 8.8.8.8
8.8.4.4
If not, you should configure your network to use 8.8.8.8 as the primary DNS and 8.8.4.4 as the secondary.
Alternatively, you may be using a VPN or private DNS; in that case, contact your service provider for assistance.
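A quick way to check from Python whether SRV lookups succeed with the current DNS configuration at all (the cluster hostname below is a placeholder):
import dns.resolver

# Placeholder Atlas hostname; a working lookup prints the member hosts,
# while a DNS problem raises an exception like the one in the question.
answers = dns.resolver.query("_mongodb._tcp.cluster0-r091o.gcp.mongodb.net", "SRV")
for record in answers:
    print(record.target, record.port)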
I want to connect to MySQL inside a Docker container. I have a running instance of MySQL in a Docker container. Since port 3306 is already busy on my host, I decided to use port 8081 for my MySQL container; basically, I started my container with docker run -p 8080:80 -p 8081:3306 --name test test. When I attach to the container, I can connect to MySQL without error. I also have a web app that connects to MySQL on the exact same port (8081) from my host, which means MySQL is working properly and is reachable from outside.
But in my Python script I cannot connect, and I am unable to connect with the CLI either. It seems like the port number is simply not taken into account. For example, if I use mysql -P 8081 -u root -p, it just tries to connect to the host's MySQL (port 3306) instead of the container's MySQL on port 8081 (when I enter the host MySQL credentials, it connects to the host MySQL). In my Python script I used this: conn = MySQLdb.connect(host='localhost', port=8081, user='root', passwd=''), but that is not working either. In the MySQL man page I see this:
· --port=port_num, -P port_num
The TCP/IP port number to use for the connection.
What am I doing wrong, please?
mysql --version:
mysql Ver 14.14 Distrib 5.7.18, for Linux (x86_64) using EditLine wrapper
Update: here is my docker ps output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b370c91594d3 test "/bin/sh -c /start.sh" 14 hours ago Up 14 hours 8080/tcp, 0.0.0.0:8080->80/tcp, 0.0.0.0:8081->3306/tcp test
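One gotcha that may well be the cause here (stated as an assumption): when the host is given as localhost, the MySQL client library usually connects over the Unix socket and ignores the TCP port, so both the CLI and MySQLdb end up at the host's own MySQL. Forcing TCP with 127.0.0.1 is worth trying; a minimal sketch:
import MySQLdb

# 127.0.0.1 forces a TCP connection to the published container port;
# with host="localhost" the port argument is typically ignored in favour of the socket.
conn = MySQLdb.connect(host="127.0.0.1", port=8081, user="root", passwd="")
print(conn.get_server_info())
conn.close()
The CLI equivalent would be mysql -h 127.0.0.1 -P 8081 -u root -p (or adding --protocol=TCP).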
I'm trying to work with HBase, Thrift, and Python on a remote machine (Ubuntu) from Eclipse RSE on Windows. Everything works fine, but when I try to connect to localhost I get an error:
thrift.transport.TTransport.TTransportException: Could not connect to localhost:9090
If I run this code via an SSH terminal on the remote machine, it works perfectly.
Here is my code:
#!/usr/bin/env python
import sys, glob
sys.path.append('gen-py')
sys.path.insert(0, glob.glob('lib/py/build/lib.*')[0])
from thrift import Thrift
from thrift.transport import TSocket
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol
from hbase import Hbase
# Connect to HBase Thrift server
transport = TTransport.TBufferedTransport(TSocket.TSocket('localhost', 9090))
protocol = TBinaryProtocol.TBinaryProtocolAccelerated(transport)
# Create and open the client connection
client = Hbase.Client(protocol)
transport.open()
tables = client.getTableNames()
print(tables)
# Do Something
transport.close()
Do you know what localhost means? It means the machine you're running the command on. E.g. if I type http://localhost:8080/ in a browser on my PC then it will call the server running on port 8080 on MY machine.
I'm sure your connection worked fine if you tried connecting to localhost while on the same box. If connecting from a different machine then you'll need to know the IP address or the hostname of the box you're connecting to.
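In this case that means replacing 'localhost' in the TSocket with the Ubuntu machine's hostname or IP (the address below is a placeholder), assuming the HBase Thrift server listens on an external interface and port 9090 is reachable from Windows:
import sys
sys.path.append('gen-py')  # same generated-code path as in the question

from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from hbase import Hbase

# Placeholder address of the remote Ubuntu box running the Thrift server
transport = TTransport.TBufferedTransport(TSocket.TSocket('192.0.2.10', 9090))
protocol = TBinaryProtocol.TBinaryProtocolAccelerated(transport)
client = Hbase.Client(protocol)
transport.open()
print(client.getTableNames())
transport.close()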