Database still in use after a selenium test in Django - python

I have a Django project in which I'm starting to write Selenium tests. The first one looks like this:
from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

from core.models import User
from example import settings

BACH_EMAIL = "johann.sebastian.bach@classics.com"
PASSWORD = "password"


class TestImportCRMData(StaticLiveServerTestCase):

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.webdriver = webdriver.Chrome()
        cls.webdriver.implicitly_wait(10)

    @classmethod
    def tearDownClass(cls):
        cls.webdriver.close()
        cls.webdriver.quit()
        super().tearDownClass()

    def setUp(self):
        self.admin = User.objects.create_superuser(email=BACH_EMAIL, password=PASSWORD)

    def test_admin_tool(self):
        self.webdriver.get(f"http://{settings.ADMIN_HOST}:{self.server_thread.port}/admin")
        self.webdriver.find_element_by_id("id_username").send_keys(BACH_EMAIL)
        self.webdriver.find_element_by_id("id_password").send_keys(PASSWORD)
        self.webdriver.find_element_by_id("id_password").send_keys(Keys.RETURN)
        self.webdriver.find_element_by_link_text("Users").click()
When I run it, the test passes but it still ends with this error:
Traceback (most recent call last):
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\db\backends\utils.py", line 83, in _execute
return self.cursor.execute(sql)
psycopg2.OperationalError: database "test_example" is being accessed by other users
DETAIL: There is 1 other session using the database.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2018.2.4\helpers\pycharm\django_test_manage.py", line 168, in <module>
utility.execute()
File "C:\Program Files\JetBrains\PyCharm 2018.2.4\helpers\pycharm\django_test_manage.py", line 142, in execute
_create_command().run_from_argv(self.argv)
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\core\management\commands\test.py", line 26, in run_from_argv
super().run_from_argv(argv)
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\core\management\base.py", line 316, in run_from_argv
self.execute(*args, **cmd_options)
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\core\management\base.py", line 353, in execute
output = self.handle(*args, **options)
File "C:\Program Files\JetBrains\PyCharm 2018.2.4\helpers\pycharm\django_test_manage.py", line 104, in handle
failures = TestRunner(test_labels, **options)
File "C:\Program Files\JetBrains\PyCharm 2018.2.4\helpers\pycharm\django_test_runner.py", line 255, in run_tests
extra_tests=extra_tests, **options)
File "C:\Program Files\JetBrains\PyCharm 2018.2.4\helpers\pycharm\django_test_runner.py", line 156, in run_tests
return super(DjangoTeamcityTestRunner, self).run_tests(test_labels, extra_tests, **kwargs)
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\test\runner.py", line 607, in run_tests
self.teardown_databases(old_config)
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\test\runner.py", line 580, in teardown_databases
keepdb=self.keepdb,
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\test\utils.py", line 297, in teardown_databases
connection.creation.destroy_test_db(old_name, verbosity, keepdb)
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\db\backends\base\creation.py", line 257, in destroy_test_db
self._destroy_test_db(test_database_name, verbosity)
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\db\backends\base\creation.py", line 274, in _destroy_test_db
% self.connection.ops.quote_name(test_database_name))
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\db\backends\utils.py", line 68, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\db\backends\utils.py", line 77, in _execute_with_wrappers
return executor(sql, params, many, context)
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\db\backends\utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\db\utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\db\backends\utils.py", line 83, in _execute
return self.cursor.execute(sql)
django.db.utils.OperationalError: database "test_example" is being accessed by other users
DETAIL: There is 1 other session using the database.
The problem, of course, is that on the next run of the tests the database still exists, so the tests won't run without confirming deletion of the database.
If I comment out the last line:
self.webdriver.find_element_by_link_text("Users").click()
then I don't get this error, I guess just because the database connection is never established. Sometimes it's 1 other session, sometimes up to 4. In one of the cases with 4 sessions, these were the running sessions:
select * from pg_stat_activity where datname = 'test_example';
100123 test_example 29892 16393 pupeno "" ::1 61967 2018-11-15 17:28:19.552431 2018-11-15 17:28:19.562398 2018-11-15 17:28:19.564623 idle SELECT "core_user"."id", "core_user"."password", "core_user"."last_login", "core_user"."is_superuser", "core_user"."email", "core_user"."is_staff", "core_user"."is_active", "core_user"."date_joined" FROM "core_user" WHERE "core_user"."id" = 1
100123 test_example 33028 16393 pupeno "" ::1 61930 2018-11-15 17:28:18.792466 2018-11-15 17:28:18.843383 2018-11-15 17:28:18.851828 idle SELECT "django_admin_log"."id", "django_admin_log"."action_time", "django_admin_log"."user_id", "django_admin_log"."content_type_id", "django_admin_log"."object_id", "django_admin_log"."object_repr", "django_admin_log"."action_flag", "django_admin_log"."change_message", "core_user"."id", "core_user"."password", "core_user"."last_login", "core_user"."is_superuser", "core_user"."email", "core_user"."is_staff", "core_user"."is_active", "core_user"."date_joined", "django_content_type"."id", "django_content_type"."app_label", "django_content_type"."model" FROM "django_admin_log" INNER JOIN "core_user" ON ("django_admin_log"."user_id" = "core_user"."id") LEFT OUTER JOIN "django_content_type" ON ("django_admin_log"."content_type_id" = "django_content_type"."id") WHERE "django_admin_log"."user_id" = 1 ORDER BY "django_admin_log"."action_time" DESC LIMIT 10
100123 test_example 14128 16393 pupeno "" ::1 61988 2018-11-15 17:28:19.767225 2018-11-15 17:28:19.776150 2018-11-15 17:28:19.776479 idle SELECT "core_firm"."id", "core_firm"."name", "core_firm"."host_name" FROM "core_firm" WHERE "core_firm"."id" = 1
100123 test_example 9604 16393 pupeno "" ::1 61960 2018-11-15 17:28:19.469197 2018-11-15 17:28:19.478775 2018-11-15 17:28:19.478788 idle COMMIT
I've been trying to find the minimum reproducible example of this problem, but so far I haven't succeeded.
Any ideas what could be causing this or how to find out more about what the issue could be?

This error message...
django.db.utils.OperationalError: database "test_example" is being accessed by other users
DETAIL: There is 1 other session using the database.
...implies that there is an existing session using the database and the new session can't access the database.
Some more information about the Django version, database type and version, along with the Selenium, ChromeDriver, and Chrome versions, would have helped us debug this issue better.
However, you need to take care of a couple of things from a Selenium perspective:
As you are initiating a new session on the next run of the tests, you need to remove the line cls.webdriver.close(), since the following line, cls.webdriver.quit(), is enough to terminate the existing session. As per best practices, you should invoke the quit() method within tearDown(). Invoking quit() DELETEs the current browsing session by sending the "quit" command with {"flags":["eForceQuit"]} and finally sends a GET request on the /shutdown endpoint. Here is an example:
1503397488598 webdriver::server DEBUG -> DELETE /session/8e457516-3335-4d3b-9140-53fb52aa8b74
1503397488607 geckodriver::marionette TRACE -> 37:[0,4,"quit",{"flags":["eForceQuit"]}]
1503397488821 webdriver::server DEBUG -> GET /shutdown
So invoking the quit() method kills the web browser session and the WebDriver instance completely. Hence you don't have to incorporate any additional steps, which would only add overhead.
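Applied to the test above, the class-level teardown would look roughly like this (a minimal sketch, assuming the same cls.webdriver attribute as in the question):

@classmethod
def tearDownClass(cls):
    # quit() alone ends the browsing session and shuts down the driver;
    # close() is not needed here and can be dropped.
    cls.webdriver.quit()
    super().tearDownClass()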
You can find a detailed discussion in Selenium : How to stop geckodriver process impacting PC memory, without calling driver.quit()?
Web applications built today are gradually moving towards a dynamically rendered HTML DOM, so to interact with elements within the DOM tree, an implicit wait is no longer that effective and you need to use WebDriverWait instead. It is also worth mentioning that mixing implicit waits and explicit waits can cause unpredictable wait times.
So you need to:
Remove the implicitly_wait(10):
cls.webdriver.implicitly_wait(10)
Induce WebDriverWait while interacting with elements:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

WebDriverWait(self.webdriver, 20).until(EC.element_to_be_clickable((By.ID, "id_username"))).send_keys(BACH_EMAIL)
self.webdriver.find_element_by_id("id_password").send_keys(PASSWORD)
self.webdriver.find_element_by_id("id_password").send_keys(Keys.RETURN)
WebDriverWait(self.webdriver, 20).until(EC.element_to_be_clickable((By.LINK_TEXT, "Users"))).click()
Now, as per the discussion in Persistent connections not closed by LiveServerTestCase, preventing dropping test databases, this issue was observed, reported, discussed, and fixed within Django v1.6. The main issue was:
Whenever a PostgreSQL connection is marked as persistent (CONN_MAX_AGE = None) and a LiveServerTestCase is executed, the connection from the server thread is never closed, leading to inability to drop the test database.
This is exactly the reason why you see:
select * from pg_stat_activity where datname = 'test_example';
100123 test_example 29892 16393 pupeno "" ::1 61967 2018-11-15 17:28:19.552431 2018-11-15 17:28:19.562398 2018-11-15 17:28:19.564623 idle SELECT "core_user"."id", "core_user"."password", "core_user"."last_login", "core_user"."is_superuser", "core_user"."email", "core_user"."is_staff", "core_user"."is_active", "core_user"."date_joined" FROM "core_user" WHERE "core_user"."id" = 1
100123 test_example 33028 16393 pupeno "" ::1 61930 2018-11-15 17:28:18.792466 2018-11-15 17:28:18.843383 2018-11-15 17:28:18.851828 idle SELECT "django_admin_log"."id", "django_admin_log"."action_time", "django_admin_log"."user_id", "django_admin_log"."content_type_id", "django_admin_log"."object_id", "django_admin_log"."object_repr", "django_admin_log"."action_flag", "django_admin_log"."change_message", "core_user"."id", "core_user"."password", "core_user"."last_login", "core_user"."is_superuser", "core_user"."email", "core_user"."is_staff", "core_user"."is_active", "core_user"."date_joined", "django_content_type"."id", "django_content_type"."app_label", "django_content_type"."model" FROM "django_admin_log" INNER JOIN "core_user" ON ("django_admin_log"."user_id" = "core_user"."id") LEFT OUTER JOIN "django_content_type" ON ("django_admin_log"."content_type_id" = "django_content_type"."id") WHERE "django_admin_log"."user_id" = 1 ORDER BY "django_admin_log"."action_time" DESC LIMIT 10
100123 test_example 14128 16393 pupeno "" ::1 61988 2018-11-15 17:28:19.767225 2018-11-15 17:28:19.776150 2018-11-15 17:28:19.776479 idle SELECT "core_firm"."id", "core_firm"."name", "core_firm"."host_name" FROM "core_firm" WHERE "core_firm"."id" = 1
100123 test_example 9604 16393 pupeno "" ::1 61960 2018-11-15 17:28:19.469197 2018-11-15 17:28:19.478775 2018-11-15 17:28:19.478788 idle COMMIT
It was further observed that, even with CONN_MAX_AGE=None, after LiveServerTestCase.tearDownClass(), querying PostgreSQL's pg_stat_activity shows a lingering connection in state idle (which was the connection created by the previous test in your case). So it was pretty evident that the idle connections are not closed when the thread terminates, and the needle of suspicion pointed at:
LiveServerThread(threading.Thread), which controls the thread running a live HTTP server while the tests are running:
class LiveServerThread(threading.Thread):

    def __init__(self, host, static_handler, connections_override=None):
        self.host = host
        self.port = None
        self.is_ready = threading.Event()
        self.error = None
        self.static_handler = static_handler
        self.connections_override = connections_override
        super(LiveServerThread, self).__init__()

    def run(self):
        """
        Sets up the live server and databases, and then loops over handling
        http requests.
        """
        if self.connections_override:
            # Override this thread's database connections with the ones
            # provided by the main thread.
            for alias, conn in self.connections_override.items():
                connections[alias] = conn
        try:
            # Create the handler for serving static and media files
            handler = self.static_handler(_MediaFilesHandler(WSGIHandler()))
            self.httpd = self._create_server(0)
            self.port = self.httpd.server_address[1]
            self.httpd.set_app(handler)
            self.is_ready.set()
            self.httpd.serve_forever()
        except Exception as e:
            self.error = e
            self.is_ready.set()

    def _create_server(self, port):
        return WSGIServer((self.host, port), QuietWSGIRequestHandler, allow_reuse_address=False)

    def terminate(self):
        if hasattr(self, 'httpd'):
            # Stop the WSGI server
            self.httpd.shutdown()
            self.httpd.server_close()
LiveServerTestCase(TransactionTestCase), which basically does the same as TransactionTestCase but also launches a live HTTP server in a separate thread so that the tests may use another testing framework, such as Selenium, instead of the built-in dummy client:
class LiveServerTestCase(TransactionTestCase):
    host = 'localhost'
    static_handler = _StaticFilesHandler

    @classproperty
    def live_server_url(cls):
        return 'http://%s:%s' % (cls.host, cls.server_thread.port)

    @classmethod
    def setUpClass(cls):
        super(LiveServerTestCase, cls).setUpClass()
        connections_override = {}
        for conn in connections.all():
            # If using in-memory sqlite databases, pass the connections to
            # the server thread.
            if conn.vendor == 'sqlite' and conn.is_in_memory_db(conn.settings_dict['NAME']):
                # Explicitly enable thread-shareability for this connection
                conn.allow_thread_sharing = True
                connections_override[conn.alias] = conn

        cls._live_server_modified_settings = modify_settings(
            ALLOWED_HOSTS={'append': cls.host},
        )
        cls._live_server_modified_settings.enable()
        cls.server_thread = cls._create_server_thread(connections_override)
        cls.server_thread.daemon = True
        cls.server_thread.start()

        # Wait for the live server to be ready
        cls.server_thread.is_ready.wait()
        if cls.server_thread.error:
            # Clean up behind ourselves, since tearDownClass won't get called in
            # case of errors.
            cls._tearDownClassInternal()
            raise cls.server_thread.error

    @classmethod
    def _create_server_thread(cls, connections_override):
        return LiveServerThread(
            cls.host,
            cls.static_handler,
            connections_override=connections_override,
        )

    @classmethod
    def _tearDownClassInternal(cls):
        # There may not be a 'server_thread' attribute if setUpClass() for some
        # reasons has raised an exception.
        if hasattr(cls, 'server_thread'):
            # Terminate the live server's thread
            cls.server_thread.terminate()
            cls.server_thread.join()

        # Restore sqlite in-memory database connections' non-shareability
        for conn in connections.all():
            if conn.vendor == 'sqlite' and conn.is_in_memory_db(conn.settings_dict['NAME']):
                conn.allow_thread_sharing = False

    @classmethod
    def tearDownClass(cls):
        cls._tearDownClassInternal()
        cls._live_server_modified_settings.disable()
        super(LiveServerTestCase, cls).tearDownClass()
The solution was to close only the non-overridden connections, and it was incorporated from this pull request / commit. The changes were:
In django/test/testcases.py add:
finally:
    connections.close_all()
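In context, the change sits at the end of LiveServerThread.run(), so the worker thread closes its database connections when the server loop exits. The exact diff is in the linked pull request / commit; this sketch just shows where it lands, based on the run() method quoted above:

def run(self):
    if self.connections_override:
        for alias, conn in self.connections_override.items():
            connections[alias] = conn
    try:
        handler = self.static_handler(_MediaFilesHandler(WSGIHandler()))
        self.httpd = self._create_server(0)
        self.port = self.httpd.server_address[1]
        self.httpd.set_app(handler)
        self.is_ready.set()
        self.httpd.serve_forever()
    except Exception as e:
        self.error = e
        self.is_ready.set()
    finally:
        # Close all connections opened in this thread so the test
        # database can be dropped afterwards.
        connections.close_all()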
Add a new file tests/servers/test_liveserverthread.py:
from django.db import DEFAULT_DB_ALIAS, connections
from django.test import LiveServerTestCase, TestCase


class LiveServerThreadTest(TestCase):

    def run_live_server_thread(self, connections_override=None):
        thread = LiveServerTestCase._create_server_thread(connections_override)
        thread.daemon = True
        thread.start()
        thread.is_ready.wait()
        thread.terminate()

    def test_closes_connections(self):
        conn = connections[DEFAULT_DB_ALIAS]
        if conn.vendor == 'sqlite' and conn.is_in_memory_db():
            self.skipTest("the sqlite backend's close() method is a no-op when using an in-memory database")
        # Pass a connection to the thread to check they are being closed.
        connections_override = {DEFAULT_DB_ALIAS: conn}

        saved_sharing = conn.allow_thread_sharing
        try:
            conn.allow_thread_sharing = True
            self.assertTrue(conn.is_usable())
            self.run_live_server_thread(connections_override)
            self.assertFalse(conn.is_usable())
        finally:
            conn.allow_thread_sharing = saved_sharing
In tests/servers/tests.py remove:
finally:
    TestCase.tearDownClass()
In tests/servers/tests.py add:
finally:
    if hasattr(TestCase, 'server_thread'):
        TestCase.server_thread.terminate()
Solution
Steps:
Ensure you have updated to the latest released version of the Django package.
Ensure you are using the latest released version of selenium v3.141.0.
Ensure you are using the latest released version of Chrome v76 and ChromeDriver 76.0.
Outro
You can find a similar discussion in django.db.utils.IntegrityError: FOREIGN KEY constraint failed while executing LiveServerTestCases through Selenium and Python Django

There has been an issue reported long ago for the same problem (ticket #22414).
One thing you need to make sure of is that CONN_MAX_AGE is set to 0 and not None.
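In settings.py that would look roughly like this (a sketch; the database name and credentials are placeholders, not taken from the question):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'example',
        'USER': 'example_user',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'CONN_MAX_AGE': 0,  # do not keep persistent connections open
    }
}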
Also, you can use something like the following in your teardown:
from django.db import connections
from django.db.utils import OperationalError


@classmethod
def tearDownClass(cls):
    # Workaround for https://code.djangoproject.com/ticket/22414
    # Persistent connections not closed by LiveServerTestCase, preventing dropping test databases
    # https://github.com/cjerdonek/django/commit/b07fbca02688a0f8eb159f0dde132e7498aa40cc
    def close_sessions(conn):
        close_sessions_query = """
            SELECT pg_terminate_backend(pg_stat_activity.pid) FROM pg_stat_activity WHERE
                datname = current_database() AND
                pid <> pg_backend_pid();
        """
        with conn.cursor() as cursor:
            try:
                cursor.execute(close_sessions_query)
            except OperationalError:
                # We get kicked out after closing.
                pass

    for alias in connections:
        connections[alias].close()
        close_sessions(connections[alias])

    print("Forcefully closed database connections.")
The above code is from the following URL:
https://github.com/cga-harvard/Hypermap-Registry/blob/cd4efad61f18194ddab2c662aa431aa21dec03f4/hypermap/tests/test_csw.py

Check the active connections to find out what causes the problem:
select * from pg_stat_activity;
You can disable extensions:
@classmethod
def setUpClass(cls):
    super().setUpClass()
    options = webdriver.chrome.options.Options()
    options.add_argument("--disable-extensions")
    cls.webdriver = webdriver.Chrome(chrome_options=options)
    cls.webdriver.implicitly_wait(10)
Then in teardown:
@classmethod
def tearDownClass(cls):
    cls.webdriver.stop_client()
    cls.webdriver.quit()
    connections.close_all()
    super().tearDownClass()

Related

How to set a query timeout in sqlalchemy using Oracle database?

I want to create a query timeout in SQLAlchemy. I have an Oracle database.
I have tried the following code:
import sqlalchemy
engine = sqlalchemy.create_engine('oracle://db', connect_args={'querytimeout': 10})
I got the following error:
TypeError: 'querytimeout' is an invalid keyword argument for this function
I would like a solution looking like:
connection.execute('query').set_timeout(10)
Maybe it is possible to set a timeout in the SQL query? I found how to do it in PL/SQL, but I need plain SQL.
How can I set a query timeout?
The only way you can set a connection timeout for the Oracle engine from SQLAlchemy is to create and configure a sqlnet.ora file.
Linux
Create the file sqlnet.ora in the folder:
/opt/oracle/instantclient_19_9/network/admin
Windows
For Windows, create the \network\admin folder, for example:
C:\oracle\instantclient_19_9\network\admin
Example sqlnet.ora file
SQLNET.INBOUND.CONNECT_TIMEOUT = 120
SQLNET.SEND_TIMEOUT = 120
SQLNET.RECV_TIMEOUT = 120
You can find more parameters here: https://docs.oracle.com/cd/E11882_01/network.112/e10835/sqlnet.htm
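If the client does not pick the file up from the instant client directory, a common approach (an assumption on my part, not part of the original answer) is to point the TNS_ADMIN environment variable at the directory containing sqlnet.ora before creating the engine:

import os
from sqlalchemy import create_engine

# Assumption: sqlnet.ora lives in this directory; adjust the path to your install.
os.environ['TNS_ADMIN'] = '/opt/oracle/instantclient_19_9/network/admin'

# Placeholder credentials and SID, for illustration only.
engine = create_engine('oracle://user:password@host:1521/ORCL')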
The way to do it in Oracle is via the Resource Manager. Have a look here.
timeout decorator
Get your session handle as you normally would. (Notice that the session has not actually connected yet.) Then, test the session in a function that is decorated with wrapt_timeout_decorator.timeout.
#!/usr/bin/env python3
from time import time

from cx_Oracle import makedsn
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.sql import text
from wrapt_timeout_decorator import timeout


class ConnectionTimedOut(Exception):
    pass


class Blog:
    def __init__(self):
        self.port = None

    def connect(self, connection_timeout):
        @timeout(connection_timeout, timeout_exception=ConnectionTimedOut)
        def test_session(session):
            session.execute(text('select dummy from dual'))

        session = sessionmaker(bind=self.engine())()
        test_session(session)
        return session

    def engine(self):
        return create_engine(
            self.connection_string(),
            max_identifier_length=128
        )

    def connection_string(self):
        driver = 'oracle'
        username = 'USR'
        password = 'solarwinds123'
        return '%s://%s:%s@%s' % (
            driver,
            username,
            password,
            self.dsn()
        )

    def dsn(self):
        host = 'hn.com'
        dbname = 'ORCL'
        print('port: %s expected: %s' % (
            self.port,
            'success' if self.port == 1530 else 'timeout'
        ))
        return makedsn(host, self.port, dbname)

    def run(self):
        self.port = 1530
        session = self.connect(connection_timeout=4)
        for r in session.execute(text('select status from v$instance')):
            print(r.status)

        self.port = 1520
        session = self.connect(connection_timeout=4)
        for r in session.execute(text('select status from v$instance')):
            print(r.status)


if __name__ == '__main__':
    Blog().run()
In this example, the network is firewalled with port 1530 open. Port 1520 is blocked and leads to a TCP connection timeout. Output:
port: 1530 expected: success
OPEN
port: 1520 expected: timeout
Traceback (most recent call last):
File "./blog.py", line 68, in <module>
Blog().run()
File "./blog.py", line 62, in run
session = self.connect(connection_timeout=4)
File "./blog.py", line 27, in connect
test_session(session)
File "/home/exagriddba/lib/python3.8/site-packages/wrapt_timeout_decorator/wrapt_timeout_decorator.py", line 123, in wrapper
return wrapped_with_timeout(wrap_helper)
File "/home/exagriddba/lib/python3.8/site-packages/wrapt_timeout_decorator/wrapt_timeout_decorator.py", line 131, in wrapped_with_timeout
return wrapped_with_timeout_process(wrap_helper)
File "/home/exagriddba/lib/python3.8/site-packages/wrapt_timeout_decorator/wrapt_timeout_decorator.py", line 145, in wrapped_with_timeout_process
return timeout_wrapper()
File "/home/exagriddba/lib/python3.8/site-packages/wrapt_timeout_decorator/wrap_function_multiprocess.py", line 43, in __call__
self.cancel()
File "/home/exagriddba/lib/python3.8/site-packages/wrapt_timeout_decorator/wrap_function_multiprocess.py", line 51, in cancel
raise_exception(self.wrap_helper.timeout_exception, self.wrap_helper.exception_message)
File "/home/exagriddba/lib/python3.8/site-packages/wrapt_timeout_decorator/wrap_helper.py", line 178, in raise_exception
raise exception(exception_message)
__main__.ConnectionTimedOut: Function test_session timed out after 4.0 seconds
Caution
Do not decorate the function that calls sessionmaker, or you will get:
_pickle.PicklingError: Can't pickle <class 'sqlalchemy.orm.session.Session'>: it's not the same object as sqlalchemy.orm.session.Session
SCAN
This implementation is a "connection timeout" without regard to underlying cause. The client could time out before trying all available SCAN listeners.

PyHive with Kerberos throws Authentication error after few calls

I am trying to connect to Hive using Python (the PyHive library) to read some data, and then I connect it to a Flask app to show in a dashboard.
It all works fine for a few calls to Hive; however, soon after that I start getting the following error.
Traceback (most recent call last):
File "libs/hive.py", line 63, in <module>
cur = h.connect().cursor()
File "libs/hive.py", line 45, in connect
kerberos_service_name='hive')
File "/home1/igns/git/emsr/.venv/lib/python2.7/site-packages/pyhive/hive.py", line 94, in connect
return Connection(*args, **kwargs)
File "/home1/igns/git/emsr/.venv/lib/python2.7/site-packages/pyhive/hive.py", line 192, in __init__
self._transport.open()
File "/home1/igns/git/emsr/.venv/lib/python2.7/site-packages/thrift_sasl/__init__.py", line 79, in open
message=("Could not start SASL: %s" % self.sasl.getError()))
thrift.transport.TTransport.TTransportException: Could not start SASL: Error in sasl_client_start (-1) SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (No Kerberos credentials available (default cache: FILE:/tmp/krb5cc_cdc995595290_51CD7j))
Following is my code
from pyhive import hive


class Hive(object):
    def connect(self):
        return hive.connect(host='hive.hadoop-prod.abc.com',
                            port=10000,
                            database='temp',
                            username='gaurang.shah',
                            auth='KERBEROS',
                            kerberos_service_name='hive')


if __name__ == '__main__':
    h = Hive()
    cur = h.connect().cursor()
    cur.execute("select * from temp.migration limit 1")
    res = cur.fetchall()
    print res
Calling Script
source .venv/bin/activate
for i in {1..50}
do
python get_hive_data.py
sleep 300
done
Observation
When it's working, I can see the hive service principal when I run klist; however, I don't when I see the above error message.
Klist when it's working
Ticket cache: FILE:/tmp/krb5cc_cdc995595290_XyMnhu
Default principal: gaurang.shah@ABC.COM
Valid starting Expires Service principal
12/04/2018 14:37:28 12/05/2018 00:37:28 krbtgt/ABC.COM@ABC.COM
renew until 12/05/2018 14:37:24
12/04/2018 14:39:06 12/05/2018 00:37:28 hive/hive_server.ABC.COM@ABC.COM
renew until 12/05/2018 14:37:24
Klist when it's not working
Ticket cache: FILE:/tmp/krb5cc_cdc995595290_XyMnhu
Default principal: gaurang.shah@ABC.COM
Valid starting Expires Service principal
12/04/2018 14:37:28 12/05/2018 00:37:28 krbtgt/ABC.COM@ABC.COM
renew until 12/05/2018 14:37:24
Update:
So I don't think it's after a certain number of calls; I think it's after a certain amount of time (about one hour). I changed the sleep time to 3600 sec, and just after the first call I started getting the error.
This is weird, as the ticket for hive/hive_server.ABC.COM@ABC.COM was valid for 7 days.
I know this is an old post, but if you make a new connection every time you make a call, that should resolve the issue.
from pyhive import hive


class Hive(object):
    def connect(self):
        return hive.connect(host='hive.hadoop-prod.abc.com',
                            port=10000,
                            database='temp',
                            username='gaurang.shah',
                            auth='KERBEROS',
                            kerberos_service_name='hive')


if __name__ == '__main__':
    def newConnect(query):
        h = Hive()
        cur = h.connect().cursor()
        cur.execute(query)
        res = cur.fetchall()
        return res

    myConnectionAndResults = newConnect("select * from temp.migration limit 1")
    print myConnectionAndResults

Python cassandra-driver OperationTimeOut on every query in Celery task

I have a problem with every insert query (a small query) which is executed in Celery tasks asynchronously.
In sync mode, when I do an insert, everything works fine, but when it is executed via apply_async() I get this:
OperationTimedOut('errors=errors=errors={}, last_host=***.***.*.***, last_host=None, last_host=None',)
Traceback:
Traceback (most recent call last):
File "/var/nfs_www/***/env_v0/local/lib/python2.7/site-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/var/nfs_www/***/env_v0/local/lib/python2.7/site-packages/celery/app/trace.py", line 437, in __protected_call__
return self.run(*args, **kwargs)
File "/var/nfs_www/***/www_v1/app/mods/news_feed/tasks.py", line 26, in send_new_comment_reply_notifications
send_new_comment_reply_notifications_method(comment_id)
File "/var/nfs_www/***www_v1/app/mods/news_feed/methods.py", line 83, in send_new_comment_reply_notifications
comment_type='comment_reply'
File "/var/nfs_www/***/www_v1/app/mods/news_feed/models/storage.py", line 129, in add
CommentsFeed(**kwargs).save()
File "/var/nfs_www/***/env_v0/local/lib/python2.7/site-packages/cqlengine/models.py", line 531, in save
consistency=self.__consistency__).save()
File "/var/nfs_www/***/env_v0/local/lib/python2.7/site-packages/cqlengine/query.py", line 907, in save
self._execute(insert)
File "/var/nfs_www/***/env_v0/local/lib/python2.7/site-packages/cqlengine/query.py", line 786, in _execute
tmp = execute(q, consistency_level=self._consistency)
File "/var/nfs_www/***/env_v0/local/lib/python2.7/site-packages/cqlengine/connection.py", line 95, in execute
result = session.execute(query, params)
File "/var/nfs_www/***/env_v0/local/lib/python2.7/site-packages/cassandra/cluster.py", line 1103, in execute
result = future.result(timeout)
File "/var/nfs_www/***/env_v0/local/lib/python2.7/site-packages/cassandra/cluster.py", line 2475, in result
raise OperationTimedOut(errors=self._errors, last_host=self._current_host)
OperationTimedOut: errors={}, last_host=***.***.*.***
Does anyone have ideas about the problem?
I found this: When cassandra-driver was executing the query, cassandra-driver returned error OperationTimedOut, but my query is very small and the problem occurs only in Celery tasks.
UPDATE:
I made a test task and it raises this error too.
@celery.task()
def test_task_with_cassandra():
    from app import cassandra_session
    cassandra_session.execute('use news_feed')
    return 'Done'
UPDATE 2:
Made this:
@celery.task()
def test_task_with_cassandra():
    from cqlengine import connection
    connection.setup(app.config['CASSANDRA_SERVERS'], port=app.config['CASSANDRA_PORT'],
                     default_keyspace='test_keyspace')
    from .models import Feed
    Feed.objects.count()
    return 'Done'
Got this:
NoHostAvailable('Unable to connect to any servers', {'***.***.*.***': OperationTimedOut('errors=errors=Timed out creating connection, last_host=None, last_host=None',)})
From the shell I can connect to it.
UPDATE 3:
From a deleted thread on a GitHub issue (found this in my emails); this worked for me too:
Here's how, in substance, I plug CQLengine to Celery:
from celery import Celery
from celery.signals import worker_process_init, beat_init
from cqlengine import connection
from cqlengine.connection import (
    cluster as cql_cluster, session as cql_session)


def cassandra_init():
    """ Initialize a clean Cassandra connection. """
    if cql_cluster is not None:
        cql_cluster.shutdown()
    if cql_session is not None:
        cql_session.shutdown()
    connection.setup()


# Initialize worker context for both standard and periodic tasks.
worker_process_init.connect(cassandra_init)
beat_init.connect(cassandra_init)

app = Celery()
This is crude, but works. Should we add this snippet in the FAQ ?
I had a similar issue. It seemed to be related to sharing the Cassandra session between tasks. I solved it by creating a session per thread. Make sure you call get_session() from your tasks and then do this:
import threading

from cassandra.cluster import Cluster
from django.conf import settings  # assumes CASSANDRA_HOSTS / CASSANDRA_KEYSPACE live in Django settings

thread_local = threading.local()


def get_session():
    # Reuse the session already created for this thread, if any.
    if hasattr(thread_local, "cassandra_session"):
        return thread_local.cassandra_session
    cluster = Cluster(settings.CASSANDRA_HOSTS)
    session = cluster.connect(settings.CASSANDRA_KEYSPACE)
    thread_local.cassandra_session = session
    return session
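Usage from a task would then look roughly like this (a sketch; the task name, table, and query are made up for illustration, and a Celery app object named celery is assumed, as in the question's snippets):

@celery.task()
def insert_feed_item(name):
    # Each worker thread lazily creates and then reuses its own session.
    session = get_session()
    session.execute("INSERT INTO feed (name) VALUES (%s)", (name,))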
Inspired by Ron's answer, I came up with the following code to put in tasks.py:
import threading

from django.conf import settings
from cassandra.cluster import Cluster
from celery.signals import worker_process_init, worker_process_shutdown

thread_local = threading.local()


@worker_process_init.connect
def open_cassandra_session(*args, **kwargs):
    cluster = Cluster([settings.DATABASES["cassandra"]["HOST"], ], protocol_version=3)
    session = cluster.connect(settings.DATABASES["cassandra"]["NAME"])
    thread_local.cassandra_session = session


@worker_process_shutdown.connect
def close_cassandra_session(*args, **kwargs):
    session = thread_local.cassandra_session
    session.shutdown()
    thread_local.cassandra_session = None
This neat solution will automatically open and close Cassandra sessions when the Celery worker process starts and stops.
Side note: protocol_version=3 is used because Cassandra 2.1 only supports protocol versions 3 and lower.
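A task then uses the session stored on thread_local (a sketch; the task body and table are illustrative, not from the answer, and the Celery app object is assumed to be importable as shown):

from app import celery  # assumption: the project's Celery app object

@celery.task()
def count_feed_rows():
    # The session was opened by the worker_process_init handler above.
    session = thread_local.cassandra_session
    result = session.execute("SELECT count(*) FROM feed")
    return list(result)[0].count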
The other answers didn't work for me, but the question's 'update 3' did. Here's what I ended up with (small updates to the suggestion within the question):
from celery.signals import worker_process_init
from cassandra.cqlengine import connection
from cassandra.cqlengine.connection import (
    cluster as cql_cluster, session as cql_session)
from django.conf import settings


def cassandra_init(*args, **kwargs):
    """ Initialize a clean Cassandra connection. """
    if cql_cluster is not None:
        cql_cluster.shutdown()
    if cql_session is not None:
        cql_session.shutdown()
    connection.setup(settings.DATABASES["cassandra"]["HOST"].split(','), settings.DATABASES["cassandra"]["NAME"])


# Initialize worker context (only standard tasks)
worker_process_init.connect(cassandra_init)
Using django-cassandra-engine the following resolved the issue for me:
from django.db import connections
from celery.signals import worker_process_init, worker_shutdown

db_connection = connections['cassandra']


@worker_process_init.connect
def connect_db(**_):
    db_connection.reconnect()


@worker_shutdown.connect
def disconnect(**_):
    db_connection.connection.close_all()
Have a look here.

Disconnect a WiFi access point using NetworkManager and Python

I'm building a Python application that has to connect to and disconnect from WiFi on a Linux box. I'm using the NetworkManager layer, through the nice networkmanager lib found in cnetworkmanager (a Python CLI for NetworkManager, http://vidner.net/martin/software/cnetworkmanager/, thanks to Martin Vidner), in a daemon (named stationd).
This daemon runs a gobject.MainLoop. Once a timeout_add_seconds callback wakes up (triggered by a user action in the GUI), I have to disconnect the currently running WiFi and connect to a new one:
from dbus.mainloop.glib import DBusGMainLoop
DBusGMainLoop(set_as_default=True)

from networkmanager import NetworkManager
import networkmanager.applet.settings as settings
from networkmanager.applet import USER_SERVICE
from networkmanager.applet.service import NetworkManagerUserSettings, NetworkManagerSettings
import time
import memcache
import gobject

loop = gobject.MainLoop()
nm = NetworkManager()
dummy_handler = lambda *args: None
cache = memcache.Client(['127.0.0.1:11211', ])


def get_device(dev_spec, hint):
    candidates = []
    devs = NetworkManager().GetDevices()
    for dev in devs:
        if dev._settings_type() == hint:
            candidates.append(dev)
    if len(candidates) == 1:
        return candidates[0]
    for dev in devs:
        if dev["Interface"] == dev_spec:
            return dev
    print "Device '%s' not found" % dev_spec
    return None


def kill_allconnections():
    connections = nm['ActiveConnections']
    for c in connections:
        print c.object_path
        nm.DeactivateConnection(c.object_path)


class Wifi(object):
    def connect(self, ssid, security="open", password=None):
        "Connects to given Wifi network"
        c = None  # connection settings
        us = NetworkManagerUserSettings([])
        if security == "open":
            c = settings.WiFi(ssid)
        elif security == "wep":
            c = settings.Wep(ssid, password)
        elif security == "wpa":
            c = settings.WpaPsk(ssid, password)
        else:
            raise AttributeError("invalid security model '%s'" % security)
        svc = USER_SERVICE
        svc_conn = us.addCon(c.conmap)
        hint = svc_conn.settings["connection"]["type"]
        dev = get_device("", hint)
        appath = "/"
        nm.ActivateConnection(svc, svc_conn, dev, appath, reply_handler=dummy_handler, error_handler=dummy_handler)


def change_network_settings():
    key = "station:network:change"
    change = cache.get(key)
    if change is not None:
        print "DISCONNECT"
        kill_allconnections()
        print "CHANGE SETTINGS"
        wifi = cache.get(key + ':wifi')
        if wifi is not None:
            ssid = cache.get(key + ':wifi:ssid')
            security = cache.get(key + ':wifi:security')
            password = cache.get(key + ':wifi:password')
            print "SWITCHING TO %s" % ssid
            Wifi().connect(ssid, security, password)
        cache.delete(key)
    return True


def mainloop():
    gobject.timeout_add_seconds(1, change_network_settings)
    try:
        loop.run()
    except KeyboardInterrupt:
        loop.quit()


if __name__ == "__main__":
    mainloop()
This runs perfectly for a first connection (read: the box is not connected, the daemon is run, and the box connects flawlessly to WiFi). The issue is when I try to connect to another WiFi network: kill_allconnections() runs silently, and the connect method raises an exception on nm.ActivateConnection:
Traceback (most recent call last):
File "stationd.py", line 40, in change_network_settings
Wifi().connect(ssid, security, password)
File "/home/biopredictive/station/lib/network.py", line 88, in connect
us = NetworkManagerUserSettings([])
File "/home/biopredictive/station/lib/networkmanager/applet/service/__init__.py", line 71, in __init__
super(NetworkManagerUserSettings, self).__init__(conmaps, USER_SERVICE)
File "/home/biopredictive/station/lib/networkmanager/applet/service/__init__.py", line 33, in __init__
dbus.service.Object.__init__(self, bus, opath, bus_name)
File "/usr/lib/pymodules/python2.6/dbus/service.py", line 480, in __init__
self.add_to_connection(conn, object_path)
File "/usr/lib/pymodules/python2.6/dbus/service.py", line 571, in add_to_connection
self._fallback)
KeyError: "Can't register the object-path handler for '/org/freedesktop/NetworkManagerSettings': there is already a handler"
It looks like my former connection didn't release all its resources?
I'm very new to gobject/dbus programming. Would you please help?
I have answered on the D-Bus mailing list. Quoting here for the archive:
class Wifi(...):
    def connect(...):
        ...
        us = NetworkManagerUserSettings([])
NetworkManagerUserSettings is a server, not a client. (It's a design
quirk of NM which was eliminated in NM 0.9)
You should create only one for the daemon, not for each connection
attempt.
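A minimal way to apply that advice to the code above (a sketch, assuming the rest of the daemon stays the same) is to create the user-settings service once at module level and reuse it from Wifi.connect():

# Created once for the whole daemon, next to `nm = NetworkManager()`.
us = NetworkManagerUserSettings([])


class Wifi(object):
    def connect(self, ssid, security="open", password=None):
        "Connects to given Wifi network"
        if security == "open":
            c = settings.WiFi(ssid)
        elif security == "wep":
            c = settings.Wep(ssid, password)
        elif security == "wpa":
            c = settings.WpaPsk(ssid, password)
        else:
            raise AttributeError("invalid security model '%s'" % security)
        svc_conn = us.addCon(c.conmap)  # reuse the module-level settings service
        hint = svc_conn.settings["connection"]["type"]
        dev = get_device("", hint)
        nm.ActivateConnection(USER_SERVICE, svc_conn, dev, "/",
                              reply_handler=dummy_handler, error_handler=dummy_handler)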

MySQL server has gone away - Disconnect handling via checkout event handler doesn't work

Update 3/4:
I've done some testing and proved that using a checkout event handler to check for disconnects works with Elixir. I was beginning to think my problem had something to do with calling session.commit() from a subprocess. Update: I just disproved myself by calling session.commit() in a subprocess; the example below is updated. I'm using the multiprocessing module to create the subprocess.
Here's the code that shows how it should work (without even using pool_recycle!):
from sqlalchemy import exc
from sqlalchemy import event
from sqlalchemy.pool import Pool
from elixir import *
import multiprocessing as mp


class SubProcess(mp.Process):
    def run(self):
        a3 = TestModel(name="monkey")
        session.commit()


class TestModel(Entity):
    name = Field(String(255))


@event.listens_for(Pool, "checkout")
def ping_connection(dbapi_connection, connection_record, connection_proxy):
    cursor = dbapi_connection.cursor()
    try:
        cursor.execute("SELECT 1")
    except:
        # optional - dispose the whole pool
        # instead of invalidating one at a time
        # connection_proxy._pool.dispose()

        # raise DisconnectionError - pool will try
        # connecting again up to three times before raising.
        raise exc.DisconnectionError()
    cursor.close()


from sqlalchemy import create_engine
metadata.bind = create_engine("mysql://foo:bar@localhost/some_db", echo_pool=True)
setup_all(True)

subP = SubProcess()
a1 = TestModel(name='foo')
session.commit()
# pool size is now three.

print "Restart the server"
raw_input()

subP.start()
#a2 = TestModel(name='bar')
#session.commit()
Update 2:
I'm forced to find another solution, as post-1.2.2 versions of MySQL-python drop support for the reconnect param. Anyone got a solution? :\
Update 1 (old-solution, doesn't work for MySQL-python versions > 1.2.2):
Found a solution: passing connect_args={'reconnect':True} to the create_engine call fixes the problem; it automagically reconnects. I don't even seem to need the checkout event handler.
So, in the example from the question:
metadata.bind = create_engine("mysql://foo:bar@localhost/db_name", pool_size=100, pool_recycle=3600, connect_args={'reconnect':True})
Original question:
I've done quite a bit of Googling for this problem and haven't seemed to find a solution specific to Elixir - I'm trying to use the "Disconnect Handling - Pessimistic" example from the SQLAlchemy docs to handle MySQL disconnects. However, when I test this (by restarting the MySQL server), the "MySQL server has gone away" error is raised before my checkout event handler runs.
Here's the code I use to initialize elixir:
##### Initialize elixir/SQLAlchemy

# Disconnect handling
from sqlalchemy import exc
from sqlalchemy import event
from sqlalchemy.pool import Pool


@event.listens_for(Pool, "checkout")
def ping_connection(dbapi_connection, connection_record, connection_proxy):
    logging.debug("***********ping_connection**************")
    cursor = dbapi_connection.cursor()
    try:
        cursor.execute("SELECT 1")
    except:
        logging.debug("######## DISCONNECTION ERROR #########")
        # optional - dispose the whole pool
        # instead of invalidating one at a time
        # connection_proxy._pool.dispose()

        # raise DisconnectionError - pool will try
        # connecting again up to three times before raising.
        raise exc.DisconnectionError()
    cursor.close()


metadata.bind = create_engine("mysql://foo:bar@localhost/db_name", pool_size=100, pool_recycle=3600)
setup_all()
I create elixir entity objects and save them with session.commit(), during which I see the "ping_connection" message generated from the event defined above. However, when I restart the mysql server and test it again, it fails with the mysql server has gone away message just before the ping connection event.
Here's the stack trace starting from the relevant lines:
File "/usr/local/lib/python2.6/dist-packages/elixir/entity.py", line 1135, in get_by
return cls.query.filter_by(*args, **kwargs).first()
File "/usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/query.py", line 1963, in first
ret = list(self[0:1])
File "/usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/query.py", line 1857, in __getitem__
return list(res)
File "/usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/query.py", line 2032, in __iter__
return self._execute_and_instances(context)
File "/usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/query.py", line 2047, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "/usr/local/lib/python2.6/dist-packages/sqlalchemy/engine/base.py", line 1399, in execute
params)
File "/usr/local/lib/python2.6/dist-packages/sqlalchemy/engine/base.py", line 1532, in _execute_clauseelement
compiled_sql, distilled_params
File "/usr/local/lib/python2.6/dist-packages/sqlalchemy/engine/base.py", line 1640, in _execute_context
context)
File "/usr/local/lib/python2.6/dist-packages/sqlalchemy/engine/base.py", line 1633, in _execute_context
context)
File "/usr/local/lib/python2.6/dist-packages/sqlalchemy/engine/default.py", line 330, in do_execute
cursor.execute(statement, parameters)
File "/usr/lib/pymodules/python2.6/MySQLdb/cursors.py", line 166, in execute
self.errorhandler(self, exc, value)
File "/usr/lib/pymodules/python2.6/MySQLdb/connections.py", line 35, in defaulterrorhandler
raise errorclass, errorvalue
OperationalError: (OperationalError) (2006, 'MySQL server has gone away')
The final workaround was calling session.remove() at the start of methods, before manipulating and loading Elixir entities. What this does is return the connection to the pool, so that when it's used again, the pool's checkout event fires and our handler detects the disconnection. From the SQLAlchemy docs:
It’s not strictly necessary to remove the session at the end of the request - other options include calling Session.close(), Session.rollback(), Session.commit() at the end so that the existing session returns its connections to the pool and removes any existing transactional context. Doing nothing is an option too, if individual controller methods take responsibility for ensuring that no transactions remain open after a request ends.
Quite an important little piece of information that I wish were mentioned in the Elixir docs. But then I guess it assumes prior knowledge of SQLAlchemy?
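In practice the workaround looks roughly like this (a sketch; the method name is a placeholder, and TestModel is borrowed from the example above rather than from the original question):

from elixir import session


def load_items():
    # Return this thread's connection to the pool first, so the next use
    # goes through the pool's "checkout" event and the ping handler.
    session.remove()
    return TestModel.query.all()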
The actual problem is SQLAlchemy giving you the same session every time you call the sessionmaker factory. Due to this, a later query can be performed with a session opened much earlier, as long as you did not call session.remove() on the session. Having to remember to call remove() every time you request a session is no fun, however, and SQLAlchemy provides something much simpler: contextual "scoped" sessions.
To create a scoped session simply wrap your sessionmaker:
from sqlalchemy.orm import scoped_session, sessionmaker
Session = scoped_session(sessionmaker())
This way you get a contextually bound session every time you call the factory, meaning SQLAlchemy calls session.remove() for you as soon as the calling function exits. See here: sqlalchemy - lifespan of a contextual session
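Usage is then the same as with a plain session, just through the registry (a minimal sketch, assuming an engine has already been created elsewhere with create_engine):

from sqlalchemy.orm import scoped_session, sessionmaker

# `engine` is assumed to exist; it is not part of the original answer.
Session = scoped_session(sessionmaker(bind=engine))


def do_work():
    session = Session()   # returns the session bound to the current scope/thread
    session.execute("SELECT 1")
    session.commit()
    Session.remove()      # discard the session so the next call starts fresh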
Are you using the same session for both operations (before and after the mysqld restart)? If so, the "checkout" event occurs only when a new transaction is started. When you call commit(), a new transaction is started (unless you use autocommit mode) and a connection is checked out. So you restart mysqld after the checkout.
A simple hack with a commit() or rollback() call just before the second operation (and after restarting mysqld) should solve your problem. Otherwise, consider using a fresh session each time you wait a long time after the previous commit.
I'm not sure if this is the same problem that I had, but here goes:
When I encountered MySQL server has gone away, I solved it using create_engine(..., pool_recycle=3600), see http://www.sqlalchemy.org/docs/dialects/mysql.html#connection-timeouts
