We have written a Python script that uses PyMongo to connect to MongoDB. The connection details are:
username = 'abc'
password = 'xxxxxx'
server = 'dns name of that server'
port = 27017
In the program, the code looks like:
import pymongo
from pymongo import MongoClient

# build the connection URL from the details above
url = "mongodb://{}:{}@{}:{}/".format(username, password, server, port)
client = MongoClient(url, serverSelectionTimeoutMS=300)
database = client.database_name
data_insert = database.collection_name.insert_one({'id': 1, 'name': 'xyz'})
When I try to perform these operations, the following error is raised:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/pymongo/cursor.py", line 1114, in next
if len(self.__data) or self._refresh():
File "/usr/local/lib/python2.7/dist-packages/pymongo/cursor.py", line 1036, in _refresh
self.__collation))
File "/usr/local/lib/python2.7/dist-packages/pymongo/cursor.py", line 873, in __send_message
**kwargs)
File "/usr/local/lib/python2.7/dist-packages/pymongo/mongo_client.py", line 905, in _send_message_with_response
exhaust)
File "/usr/local/lib/python2.7/dist-packages/pymongo/mongo_client.py", line 916, in _reset_on_error
return func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pymongo/server.py", line 99, in send_message_with_response
with self.get_socket(all_credentials, exhaust) as sock_info:
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/usr/local/lib/python2.7/dist-packages/pymongo/server.py", line 168, in get_socket
with self.pool.get_socket(all_credentials, checkout) as sock_info:
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/usr/local/lib/python2.7/dist-packages/pymongo/pool.py", line 792, in get_socket
sock_info.check_auth(all_credentials)
File "/usr/local/lib/python2.7/dist-packages/pymongo/pool.py", line 512, in check_auth
auth.authenticate(credentials, self)
File "/usr/local/lib/python2.7/dist-packages/pymongo/auth.py", line 470, in authenticate
auth_func(credentials, sock_info)
File "/usr/local/lib/python2.7/dist-packages/pymongo/auth.py", line 450, in _authenticate_default
return _authenticate_scram_sha1(credentials, sock_info)
File "/usr/local/lib/python2.7/dist-packages/pymongo/auth.py", line 201, in _authenticate_scram_sha1
res = sock_info.command(source, cmd)
File "/usr/local/lib/python2.7/dist-packages/pymongo/pool.py", line 419, in command
collation=collation)
File "/usr/local/lib/python2.7/dist-packages/pymongo/network.py", line 116, in command
parse_write_concern_error=parse_write_concern_error)
File "/usr/local/lib/python2.7/dist-packages/pymongo/helpers.py", line 210, in _check_command_response
raise OperationFailure(msg % errmsg, code, response)
pymongo.errors.OperationFailure: Authentication failed.
When we run queries directly against MongoDB (e.g. from the mongo shell), we get responses normally, without any errors.
Because the other answers to your question didn't work for me, I'm going to copy and paste my answer from a similar question.
If you've tried the above answers and you're still getting an error:
pymongo.errors.OperationFailure: Authentication failed.
There's a good chance you need to add ?authSource=admin to the end of your uri.
Here's a working solution that I'm using with MongoDB server version 4.2.6 and MongoDB shell version v3.6.9.
from pymongo import MongoClient
# Replace these with your server details
MONGO_HOST = "XX.XXX.XXX.XXX"
MONGO_PORT = "27017"
MONGO_DB = "database"
MONGO_USER = "admin"
MONGO_PASS = "pass"
uri = "mongodb://{}:{}@{}:{}/{}?authSource=admin".format(MONGO_USER, MONGO_PASS, MONGO_HOST, MONGO_PORT, MONGO_DB)
client = MongoClient(uri)
The equivalent fix on the command line is to add --authenticationDatabase admin.
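For example, a sketch of the equivalent mongo shell invocation (host, user, and password below are placeholders):

```shell
# authenticate against the admin database instead of the target database
mongo --host XX.XXX.XXX.XXX --port 27017 -u admin -p pass --authenticationDatabase admin
```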
Well, I had been stuck with the same error for almost 3-4 hours. I came across the solution with the following steps:
From your shell, connect to MongoDB by typing: mongo
Afterwards, create (and switch to) a database: use test_database
Now create a user with readWrite and dbAdmin privileges using the following command:
db.createUser(
{
user: "test_user",
pwd: "testing12345",
roles: [ "readWrite", "dbAdmin" ]
}
);
This will print: Successfully added user: { "user" : "test_user", "roles" : [ "readWrite", "dbAdmin" ] }
You can check by typing: show users.
The JSON output will also show the name of the DB you created earlier.
Now you should be able to insert data into your database:
client = MongoClient("mongodb://test_user:testing12345@localhost:27017/test_database")
db = client.test_database
data = {"initial_test":"testing"}
db["my_collection"].insert_one(data).inserted_id
I ran into this error, and my problem was with the password.
I had what I believe to be a special character in the Master account. Changing the password to be only alphanumeric fixed it for me.
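Alternatively, instead of changing the password, you can percent-encode the credentials before putting them in the URI, which PyMongo's documentation recommends for special characters. A minimal standard-library sketch (the credentials and host are placeholders):

```python
from urllib.parse import quote_plus

# Percent-encode the username and password so reserved characters
# (@, :, /, etc.) cannot break URI parsing. Values are placeholders.
user = quote_plus("admin")
password = quote_plus("p@ss/word!")

uri = "mongodb://%s:%s@localhost:27017/?authSource=admin" % (user, password)
print(uri)  # mongodb://admin:p%40ss%2Fword%21@localhost:27017/?authSource=admin
```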
Code snippet
client = pymongo.MongoClient(
'mongodb://username:alphaNumericPassword@localhost:27017/?ssl=true&ssl_ca_certs=rds-combined-ca-bundle.pem&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false'
)
# Specify the database to be used
db = client['prod-db']
I have created a Cloud SQL instance and a new database. However, I can't seem to connect to the database using the Cloud SQL Python Connector. I have followed the sample code and steps in the documentation, but it still fails.
Error:
Traceback (most recent call last):
File "c:\Users\my_name\my_path\gl_cloudsql_mysql_fx.py", line 149, in <module>
print(bnm_data_db.isTableExist('MY_TABLE'))
File "c:\Users\my_name\my_path\gl_cloudsql_mysql_fx.py", line 81, in isTableExist
with self.__pool.connect() as db_conn:
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\engine\base.py", line 3245, in connect
return self._connection_cls(self)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\engine\base.py", line 145, in __init__
self._dbapi_connection = engine.raw_connection()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\engine\base.py", line 3269, in raw_connection
return self.pool.connect()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 455, in connect
return _ConnectionFairy._checkout(self)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 1270, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 719, in checkout
rec = pool._do_get()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\impl.py", line 168, in _do_get
with util.safe_reraise():
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\util\langhelpers.py", line 147, in __exit__
raise exc_value.with_traceback(exc_tb)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\impl.py", line 166, in _do_get
return self._create_connection()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 396, in _create_connection
return _ConnectionRecord(self)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 681, in __init__
self.__connect()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 905, in __connect
with util.safe_reraise():
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\util\langhelpers.py", line 147, in __exit__
raise exc_value.with_traceback(exc_tb)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 901, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 368, in <lambda>
return lambda rec: creator_fn()
File "c:\Users\my_name\my_path\gl_cloudsql_mysql_fx.py", line 27, in getconn
conn: pymysql.connections.Connection = self.__connector.connect(
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\google\cloud\sql\connector\connector.py", line 154, in connect
return connect_task.result()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\concurrent\futures\_base.py", line 458, in result
return self.__get_result()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\concurrent\futures\_base.py", line 403, in __get_result
raise self._exception
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\google\cloud\sql\connector\connector.py", line 261, in connect_async
return await asyncio.wait_for(get_connection(), timeout)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\asyncio\tasks.py", line 445, in wait_for
return fut.result()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\google\cloud\sql\connector\connector.py", line 257, in get_connection
return await self._loop.run_in_executor(None, connect_partial)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\google\cloud\sql\connector\pymysql.py", line 54, in connect
socket.create_connection((ip_address, SERVER_PROXY_PORT)),
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\socket.py", line 845, in create_connection
raise err
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\socket.py", line 833, in create_connection
sock.connect(sa)
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it
I run this Python code locally on my machine (in a Conda environment).
Here's my code:
from google.cloud.sql.connector import Connector
import sqlalchemy
import pymysql
connector = Connector()
# function to return the database connection
def getconn() -> pymysql.connections.Connection:
conn: pymysql.connections.Connection = connector.connect(
"my_project:my_location:my_instance",
"pymysql",
user="user",
password="password",
db="my_database",
)
return conn
# create connection pool
pool = sqlalchemy.create_engine(
"mysql+pymysql://",
creator=getconn,
)
# insert statement
check_table_statement = sqlalchemy.text(
"SELECT count(*) from information_schema.tables",
)
with pool.connect() as db_conn:
result = db_conn.execute(check_table_statement)
print(result)
connector.close()
I have tried adding my IP address to the Cloud SQL instance's authorized networks. However, I am not sure which one is the correct address. I have checked my IP address multiple ways, but all of them give different results:
Google
whatsmyip.org
cmd command nslookup myip.opendns.com resolver1.opendns.com
No luck with any of them.
Note:
I did not check the Allow only SSL connections box. So, I reckon, there is no need to include the CA certificate.
The CloudSQL instance is using Public IP.
Update:
I have added full error.
To solve this, check the following:
Make sure you enabled Public IP on the Cloud SQL instance
Then, create a user with hosts allowed as % (all IPs).
Next, ensure that you have a database created by the name of my_database by clicking on the Databases button on the left navigation inside Cloud SQL dashboard.
Ensure that you are including the right service credentials in your script with Cloud SQL Client role.
You should be able to connect. Do let me know if you face any issues, will update the answer accordingly :)
A couple of things to try in order to resolve the ConnectionRefusedError:
Make sure the Cloud SQL Admin API is enabled within your Google Cloud Project.
Make sure the IAM principal you are using for authorization within your environment has the Cloud SQL Client Role granted to it.
Double check that your instance connection name is correct (first argument being passed to the .connect method) by verifying it on the Cloud SQL instance overview page.
If you are connecting to a Cloud SQL instance Private IP make sure you are connecting from a machine within the VPC network and that your code is updated for private IP connections:
from google.cloud.sql.connector import Connector, IPTypes
import sqlalchemy
import pymysql
connector = Connector()
# function to return the database connection
def getconn() -> pymysql.connections.Connection:
conn: pymysql.connections.Connection = connector.connect(
"my_project:my_location:my_instance",
"pymysql",
user="user",
password="password",
db="my_database",
ip_type=IPTypes.PRIVATE,
)
return conn
Let me know if any of these help resolve your connection, otherwise I can take a deeper look into it for you :)
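As a small aid for the third check above, the instance connection name must have exactly three colon-separated, non-empty parts (project:region:instance). A quick sanity-check sketch (the names are placeholders):

```python
def looks_like_connection_name(name: str) -> bool:
    # Expect exactly "project:region:instance", with all parts non-empty.
    parts = name.split(":")
    return len(parts) == 3 and all(parts)

print(looks_like_connection_name("my_project:my_location:my_instance"))  # True
print(looks_like_connection_name("my_project:my_instance"))              # False
```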
In trying to build off of this SO post, I've built the below Python script. It should connect to a remote MongoDB (actually a VM on my local machine), run a simple find() operation, then print the results:
from pymongo import MongoClient
MONGO_HOST = "172.17.0.2"
MONGO_PORT = "27017"
MONGO_DB = "myDB"
MONGO_USER = "me"
MONGO_PASS = "password01"
uri = "mongodb://{}:{}@{}:{}/{}?authSource=admin".format(MONGO_USER, MONGO_PASS, MONGO_HOST, MONGO_PORT, MONGO_DB)
client = MongoClient(uri)
myDB = client['myDB']
myTable = myDB['TableA']
limited_cursor = myTable.find({ "data1": "KeyStringHere" }, { "_id": 0, "data2" : 1 }).limit(2)
for document in limited_cursor:
print(document)
When run, the code vomits out a lot of errors:
me@ubuntu1:~/$ /usr/bin/python3 myPythonScript.py
Traceback (most recent call last):
File "myPythonScript.py", line 14, in <module>
for document in limited_cursor:
File "/usr/lib/python3/dist-packages/pymongo/cursor.py", line 1169, in next
if len(self.__data) or self._refresh():
File "/usr/lib/python3/dist-packages/pymongo/cursor.py", line 1085, in _refresh
self.__send_message(q)
File "/usr/lib/python3/dist-packages/pymongo/cursor.py", line 924, in __send_message
**kwargs)
File "/usr/lib/python3/dist-packages/pymongo/mongo_client.py", line 1029, in _send_message_with_response
exhaust)
File "/usr/lib/python3/dist-packages/pymongo/mongo_client.py", line 1040, in _reset_on_error
return func(*args, **kwargs)
File "/usr/lib/python3/dist-packages/pymongo/server.py", line 85, in send_message_with_response
with self.get_socket(all_credentials, exhaust) as sock_info:
File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/usr/lib/python3/dist-packages/pymongo/server.py", line 138, in get_socket
with self.pool.get_socket(all_credentials, checkout) as sock_info:
File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/usr/lib/python3/dist-packages/pymongo/pool.py", line 920, in get_socket
sock_info.check_auth(all_credentials)
File "/usr/lib/python3/dist-packages/pymongo/pool.py", line 609, in check_auth
auth.authenticate(credentials, self)
File "/usr/lib/python3/dist-packages/pymongo/auth.py", line 486, in authenticate
auth_func(credentials, sock_info)
File "/usr/lib/python3/dist-packages/pymongo/auth.py", line 466, in _authenticate_default
return _authenticate_scram_sha1(credentials, sock_info)
File "/usr/lib/python3/dist-packages/pymongo/auth.py", line 209, in _authenticate_scram_sha1
res = sock_info.command(source, cmd)
File "/usr/lib/python3/dist-packages/pymongo/pool.py", line 517, in command
collation=collation)
File "/usr/lib/python3/dist-packages/pymongo/network.py", line 125, in command
parse_write_concern_error=parse_write_concern_error)
File "/usr/lib/python3/dist-packages/pymongo/helpers.py", line 145, in _check_command_response
raise OperationFailure(msg % errmsg, code, response)
pymongo.errors.OperationFailure: Authentication failed.
me@ubuntu1:~/$
I'm certain the code is crashing on the last two lines of the script (for document in limited_cursor: print(document)). If I comment out those two lines, the script runs without error... but generates no output.
However, a careful reading of the traceback errors says the root problem is Authentication failed. So what does that mean? Is my URI incorrect?
I'm pretty sure the URI is correct. When I modify the script to print it to the console:
print(uri)
It comes out:
mongodb://me:password01@172.17.0.2:27017/myDB?authSource=admin
That all looks right to me. The IP address and port are definitely correct, as is the database name. I've also created a me/password01 user within Mongo. From within the Mongo shell:
> db.getUsers()
[
{
"_id" : "test.me",
"userId" : UUID("5c3b4f1a-d05f-4015-9f6d-0b6cbbd7026c"),
"user" : "me",
"db" : "test",
"roles" : [
{
"role" : "userAdminAnyDatabase",
"db" : "admin"
}
],
"mechanisms" : [
"SCRAM-SHA-1",
"SCRAM-SHA-256"
]
}
]
>
Any advice or tips is appreciated, thank you.
The userAdminAnyDatabase role does not allow searching data; it is for managing users.
You need the readWriteAnyDatabase role.
You created the user in the test database but are authenticating against the admin database (?authSource=admin).
So either change your URI to ?authSource=test or (recommended) create the user in the admin database. In the mongo shell:
> use admin;
> db.createUser( { user: "me" ....})
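For reference, a complete createUser call in the admin database might look like the following (the role choice here is an assumption; grant the least privilege you actually need):

```
use admin
db.createUser({
  user: "me",
  pwd: "password01",
  roles: [ { role: "readWriteAnyDatabase", db: "admin" } ]
})
```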
I'm trying to use Celery with SQS as the broker. In order to use SQS from my container I need to assume a role, and for that I'm using STS. My code looks like this:
role_info = {
'RoleArn': 'arn:aws:iam::xxxxxxx:role/my-role-execution',
'RoleSessionName': 'roleExecution'
}
sts_client = boto3.client('sts', region_name='eu-central-1')
credentials = sts_client.assume_role(**role_info)
aws_access_key_id = credentials["Credentials"]['AccessKeyId']
aws_secret_access_key = credentials["Credentials"]['SecretAccessKey']
aws_session_token = credentials["Credentials"]["SessionToken"]
os.environ["AWS_ACCESS_KEY_ID"] = aws_access_key_id
os.environ["AWS_SECRET_ACCESS_KEY"] = aws_secret_access_key
os.environ["AWS_DEFAULT_REGION"] = 'eu-central-1'
os.environ["AWS_SESSION_TOKEN"] = aws_session_token
broker = "sqs://"
backend = 'redis://redis-service:6379/0'
celery = Celery('tasks', broker=broker, backend=backend)
celery.conf["task_default_queue"] = 'my-queue'
celery.conf["broker_transport_options"] = {
'region': 'eu-central-1',
'predefined_queues': {
'my-queue': {
'url': 'https://sqs.eu-central-1.amazonaws.com/xxxxxxx/my-queue'
}
}
}
In the same file I have the following task:
@celery.task(name='my-queue.my_task')
def my_task(content) -> int:
print("hello")
return 0
When I start the Celery worker, I get the following error:
[2020-09-24 10:38:03,602: CRITICAL/MainProcess] Unrecoverable error: ClientError('An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://eu-central-1.queue.amazonaws.com/ is denied.',)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 921, in create_channel
return self._avail_channels.pop()
IndexError: pop from empty list
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/celery/worker/worker.py", line 208, in start
self.blueprint.start(self)
File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 369, in start
return self.obj.start()
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 318, in start
blueprint.start(self)
File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/connection.py", line 23, in start
c.connection = c.connect()
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 405, in connect
conn = self.connection_for_read(heartbeat=self.amqheartbeat)
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 412, in connection_for_read
self.app.connection_for_read(heartbeat=heartbeat))
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 439, in ensure_connected
callback=maybe_shutdown,
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 422, in ensure_connection
callback, timeout=timeout)
File "/usr/local/lib/python3.6/site-packages/kombu/utils/functional.py", line 341, in retry_over_time
return fun(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 275, in connect
return self.connection
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 823, in connection
self._connection = self._establish_connection()
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 778, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 941, in establish_connection
self._avail_channels.append(self.create_channel(self))
File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 923, in create_channel
channel = self.Channel(connection)
File "/usr/local/lib/python3.6/site-packages/kombu/transport/SQS.py", line 100, in __init__
self._update_queue_cache(self.queue_name_prefix)
File "/usr/local/lib/python3.6/site-packages/kombu/transport/SQS.py", line 105, in _update_queue_cache
resp = self.sqs.list_queues(QueueNamePrefix=queue_name_prefix)
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 337, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 656, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://eu-central-1.queue.amazonaws.com/ is denied.
If I use boto3 directly without Celery, I'm able to connect to the queue and retrieve data without this error. I don't know why Celery/Kombu try to list queues when I specify the predefined_queues configuration, which is supposed to avoid this behavior (from the docs):
If you want Celery to use a set of predefined queues in AWS, and to never attempt to list SQS queues, nor attempt to create or delete them, pass a map of queue names to URLs using the predefined_queue_urls setting
Source here
Does anyone know what is happening? How should I modify my code to make it work? It seems that Celery is not using the credentials at all.
The versions I'm using:
celery==4.4.7
boto3==1.14.54
kombu==4.5.0
Thanks!
PS: I created an issue on GitHub to track whether this could be a library error or not...
I solved the problem by updating the dependencies to the latest versions:
celery==5.0.0
boto3==1.14.54
kombu==5.0.2
pycurl==7.43.0.6
I was able to get celery==4.4.7 and kombu==4.6.11 working by setting the following configuration option:
celery.conf["task_create_missing_queues"] = False
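Putting both settings together, a hedged configuration sketch (the queue name, account id, and region below are placeholders from the question, not verified values):

```python
from celery import Celery

celery = Celery('tasks', broker='sqs://')
celery.conf["task_default_queue"] = 'my-queue'
# Don't let Kombu try to list or create queues at startup.
celery.conf["task_create_missing_queues"] = False
celery.conf["broker_transport_options"] = {
    'region': 'eu-central-1',
    'predefined_queues': {
        'my-queue': {
            'url': 'https://sqs.eu-central-1.amazonaws.com/xxxxxxx/my-queue'
        }
    }
}
```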
I am running Neo4j 2.2.1 on an Ubuntu Amazon EC2 instance. When I try to connect through Python using py2neo 2.0.7, I get the following error:
py2neo.packages.httpstream.http.SocketError: Operation not permitted
I am able to access the web-interface through http://52.10.**.***:7474/browser/
Code:
from py2neo import Graph, watch, Node, Relationship
url_graph_conn = "https://neo4j:password@52.10.**.***:7474/db/data/"
print url_graph_conn
my_conn = Graph(url_graph_conn)
babynames = my_conn.find("BabyName")
for babyname in babynames:
print 2
Error message:
https://neo4j:password@52.10.**.***:7474/db/data/
Traceback (most recent call last):
File "C:\Users\rharoon002\eclipse_workspace\peace\peace\core\graphconnection.py", line 39, in <module>
for babyname in babynames:
File "C:\Python27\lib\site-packages\py2neo\core.py", line 770, in find
response = self.cypher.post(statement, parameters)
File "C:\Python27\lib\site-packages\py2neo\core.py", line 667, in cypher
metadata = self.resource.metadata
File "C:\Python27\lib\site-packages\py2neo\core.py", line 213, in metadata
self.get()
File "C:\Python27\lib\site-packages\py2neo\core.py", line 258, in get
response = self.__base.get(headers=headers, redirect_limit=redirect_limit, **kwargs)
File "C:\Python27\lib\site-packages\py2neo\packages\httpstream\http.py", line 966, in get
return self.__get_or_head("GET", if_modified_since, headers, redirect_limit, **kwargs)
File "C:\Python27\lib\site-packages\py2neo\packages\httpstream\http.py", line 943, in __get_or_head
return rq.submit(redirect_limit=redirect_limit, **kwargs)
File "C:\Python27\lib\site-packages\py2neo\packages\httpstream\http.py", line 433, in submit
http, rs = submit(self.method, uri, self.body, self.headers)
File "C:\Python27\lib\site-packages\py2neo\packages\httpstream\http.py", line 362, in submit
raise SocketError(code, description, host_port=uri.host_port)
py2neo.packages.httpstream.http.SocketError: Operation not permitted
You are trying to access Neo4j via https on the standard port for http (7474):
url_graph_conn = "https://neo4j:password@52.10.**.***:7474/db/data/"
The standard port for an https connection is 7473. Try:
url_graph_conn = "https://neo4j:password@52.10.**.***:7473/db/data/"
And make sure you can access the web interface via https:
https://52.10.**.***:7473/browser/
You can change/see the port settings in your neo4j-server.properties file.
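If you'd rather fix the URL programmatically, a small standard-library sketch (the credentials and IP below are placeholders, not the asker's real values):

```python
from urllib.parse import urlsplit, urlunsplit

# Swap the port in the connection URL: 7474 is Neo4j's HTTP port,
# 7473 its HTTPS port in the 2.x default configuration.
url = "https://neo4j:password@52.10.0.1:7474/db/data/"
parts = urlsplit(url)
netloc = parts.netloc.replace(":7474", ":7473")
fixed = urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))
print(fixed)  # https://neo4j:password@52.10.0.1:7473/db/data/
```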
I have a form built via web2py that I need to use to validate and register a machine and store it in a database. The validation is not passing. I always get the error "Houston there seems to be a problem with Machine name or super password". If I run the Paramiko script outside the web2py environment, it works fine. Please help.
Form Details:
db.define_table('nsksystem',
Field('nskmachinename', length=128, requires = IS_NOT_EMPTY(error_message='Machine Name cant be empty'), label = T('Machine Name')),
Field('nskpassword', 'password', length=512,readable=False, label=T('Machine Password')),
Field('confirmnskpassword', 'password', length=512,readable=False, label=T('Confirm Machine Password')) )
Controller: default.py (for validating the form before insertion)
def machinevalidate(form):
host = form.vars.nskmachinename
user ='superman'
pwd = form.vars.nskpassword
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
ssh.connect(host, username=user,password=pwd)
return True
except Exception, e:
form.errors.nskpassword = 'Houston there seems to be a problem with Machine name or super password'
finally:
ssh.close()
Controller: default.py (for the insertion into database)
@auth.requires_login()
def machine():
form = SQLFORM(db.nsksystem)
if form.process(onvalidation=machinevalidate).accepted:
response.flash = 'The machine is now registered to the user.'
elif form.errors:
response.flash = 'Form has errors'
else:
response.flash = 'Please register a machine to your ID'
return dict(form=form)
Following the suggestion, I removed the try/except and here is the traceback of the error. I have wintypes.py under Lib/ctypes, but I still could not figure out why there is an import error.
Traceback (most recent call last):
File "gluon/restricted.py", line 224, in restricted
File "C:/web2py/applications/click/controllers/default.py", line 53, in <module>
File "gluon/globals.py", line 393, in <lambda>
File "gluon/tools.py", line 3444, in f
File "C:/web2py/applications/click/controllers/default.py", line 39, in machine
if form.process(onvalidation=machinevalidate).accepted:
File "gluon/html.py", line 2303, in process
File "gluon/html.py", line 2240, in validate
File "gluon/sqlhtml.py", line 1461, in accepts
File "gluon/html.py", line 2141, in accepts
File "gluon/html.py", line 146, in call_as_list
File "C:/web2py/applications/click/controllers/default.py", line 33, in machinevalidate
ssh.connect(host, username=user,password=pwd)
File "C:\Python27\lib\site-packages\paramiko\client.py", line 307, in connect
look_for_keys, gss_auth, gss_kex, gss_deleg_creds, gss_host)
File "C:\Python27\lib\site-packages\paramiko\client.py", line 456, in _auth
self._agent = Agent()
File "C:\Python27\lib\site-packages\paramiko\agent.py", line 332, in __init__
from . import win_pageant
File "gluon/custom_import.py", line 105, in custom_importer
File "C:\Python27\lib\site-packages\paramiko\win_pageant.py", line 25, in <module>
import ctypes.wintypes
File "gluon/custom_import.py", line 105, in custom_importer
ImportError: No module named wintypes
Got the solution...
I was using the compiled version of web2py. When I used the source-code version, importing modules was no longer a headache. Thank you Andrew Magee for giving a heads-up...