Python Redis Connections

I am using a Redis server with Python.
My application is multithreaded (20-32 threads per process) and I also run the app on several machines.
I have noticed that sometimes the Redis CPU usage hits 100% and the Redis server becomes unresponsive/slow.
I would like each application instance to use one connection pool with 4 connections in total.
So, for example, if I run my app on at most 20 machines, there should be 20 * 4 = 80 connections to the Redis server.
import redis
from threading import Thread

POOL = redis.ConnectionPool(max_connections=4, host='192.168.1.1', db=1, port=6379)
R_SERVER = redis.Redis(connection_pool=POOL)

class Worker(Thread):
    def __init__(self):
        Thread.__init__(self)  # initialize the Thread base class before starting
        self.start()

    def run(self):
        while True:
            key = R_SERVER.randomkey()
            if not key:
                break
            value = R_SERVER.get(key)
            self._do_something(value)

    def _do_something(self, value):
        # do something with value
        pass

if __name__ == '__main__':
    num_threads = 20
    workers = [Worker() for _ in range(num_threads)]
    for w in workers:
        w.join()
The above code should run 20 threads that each get a connection from the connection pool of max size 4 whenever a command is executed.
When is the connection released?
According to this code (https://github.com/andymccurdy/redis-py/blob/master/redis/client.py):
#### COMMAND EXECUTION AND PROTOCOL PARSING ####
def execute_command(self, *args, **options):
    "Execute a command and return a parsed response"
    pool = self.connection_pool
    command_name = args[0]
    connection = pool.get_connection(command_name, **options)
    try:
        connection.send_command(*args)
        return self.parse_response(connection, command_name, **options)
    except ConnectionError:
        connection.disconnect()
        connection.send_command(*args)
        return self.parse_response(connection, command_name, **options)
    finally:
        pool.release(connection)
After the execution of each command, the connection is released and goes back to the pool.
Can someone verify that I have understood the idea correctly and that the above example code will work as described?
Because when I look at the Redis connections, there are always more than 4.
EDIT: I just noticed that the function has a return statement before the finally. What is the purpose of finally then?

As Matthew Scragg mentioned, the finally clause is executed when the try block exits, even if a return statement ran first. In this particular case it serves to release the connection back to the pool when finished with it instead of leaving it hanging open.
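A quick standalone sketch (not from redis-py) showing that a finally block runs even after a return:

def demo():
    try:
        return "value"             # the return value is computed first...
    finally:
        print("finally runs")      # ...but this still executes before the caller gets it

print(demo())  # prints "finally runs", then "value"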
As to the unresponsiveness, look at what your server is doing. What is the memory limit of your Redis instance? How often are you saving to disk? Are you running on a Xen-based VM such as an AWS instance? Are you running replication, and if so, how many slaves are there, and are they in a good state or are they frequently requesting a full resync of data? Are any of your commands "save"?
You can answer some of these questions by using the command line interface. For example, redis-cli info persistence will tell you about the process of saving to disk, and redis-cli info memory will tell you about your memory consumption.
When obtaining the persistence information you want to specifically look at rdb_last_bgsave_status and rdb_last_bgsave_time_sec. These will tell you if the last save was successful and how long it took. The longer it takes the higher the chances are you are running into resource issues and the higher the chance you will encounter slowdowns which can appear as unresponsiveness.
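If you prefer to check this from Python, redis-py exposes the same data through the INFO command; a minimal sketch (host and port are placeholders):

import redis

r = redis.Redis(host='192.168.1.1', port=6379)
persistence = r.info('persistence')  # same data as `redis-cli info persistence`
print(persistence['rdb_last_bgsave_status'])    # 'ok' if the last background save succeeded
print(persistence['rdb_last_bgsave_time_sec'])  # how long the last background save took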

The finally block will always run even though there is a return statement before it. You may have a look at redis-py/connection.py: pool.release(connection) only puts the connection back into the available-connections pool, so the connection is still alive.
About the Redis server CPU usage: your app sends requests constantly, with no breaks or sleeps, so it uses more and more CPU (but not memory), and CPU usage has no relation to the number of open connections.
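If that diagnosis fits, one mitigation is simply to throttle the workers; a hedged sketch of the question's loop with a short pause added (the 0.01 s value is arbitrary):

import time

def run(self):
    while True:
        key = R_SERVER.randomkey()
        if not key:
            break
        value = R_SERVER.get(key)
        self._do_something(value)
        time.sleep(0.01)  # brief pause so the workers don't hammer Redis in a tight loop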

Related

Snowflake SQLAlchemy connection pooling is rolling back transaction by default, while the documentation says the default is commit transaction

I am doing a PoC where I want to implement connection pooling in Python. I took the help of snowflake-sqlalchemy, as my target is a Snowflake database. I wanted to check the following scenarios:
Run multiple stored procedures/SQL statements in parallel.
If the number of SPs/SQL statements that need to execute in parallel exceeds the pool's connections, the code should wait. I am writing a while loop for this purpose.
I have defined a function func1, and it is executed in parallel multiple times to simulate multi-threaded processing.
Problem Description:
Everything is working fine. The pool size is 2 and max_overflow is 5. When the code runs, it runs 7 parallel SPs/SQLs instead of 10. This is understood: the pool can only support 7 connections in parallel, and the rest must wait until one of the thread executions is complete. On the Snowflake side, the session is established, the query executes, and then a rollback is triggered. I am unable to figure out why the rollback is triggered. I tried to explicitly issue a commit statement from my code as well; it executes the commit and then the rollback is still triggered. The second problem is that even though I am not closing the connection, after the thread execution is complete the connection is closed automatically and the next set of statements is triggered. Ideally, I was expecting the thread not to release the connection, since the connection is never explicitly closed.
Summary: I wanted to know two things.
Why is the rollback happening?
Why is the connection getting released after thread execution is complete, even though the connection is not returned to the pool?
May I please request help in understanding the execution process? I am stuck and unable to figure out what's happening.
import time
from concurrent.futures import ThreadPoolExecutor, wait
from functools import partial
from sqlalchemy.pool import QueuePool
from snowflake.connector import connect

def get_conn():
    return connect(
        user='<username>',
        password="<password>",
        account='<account>',
        warehouse='<WH>',
        database='<DB>',
        schema='<Schema>',
        role='<role>'
    )

pool = QueuePool(get_conn, max_overflow=5, pool_size=2, timeout=10.0)

def func1():
    conn = pool.connect()
    print("Connection is created and SP is triggered")
    print("Pool size is:", pool.size())
    print("Checked OUt connections are:", pool.checkedout())
    print("Checked in Connections are:", pool.checkedin())
    print("Overflow connections are:", pool.overflow())
    print("current status of pool is:", pool.status())
    c = conn.cursor()
    # c.execute("select current_session()")
    c.execute("call SYSTEM$WAIT(60)")
    # c.execute("commit")
    # conn.close()

func_list = []
for _ in range(0, 10):
    func_list.append(partial(func1))
print(func_list)

proc = []
with ThreadPoolExecutor() as executor:
    for func in func_list:
        print("Calling func")
        while (pool.checkedout() == pool.size() + pool._max_overflow):
            print("Max connections reached. Waiting for an existing connection to release")
            time.sleep(15)
        p = executor.submit(func)
        proc.append(p)
    wait(proc)
pool.dispose()
print("process complete")

How can I provide shared state to my Flask app with multiple workers without depending on additional software?

I want to provide shared state for a Flask app which runs with multiple workers, i.e., multiple processes.
To quote this answer from a similar question on this topic:
You can't use global variables to hold this sort of data. [...] Use a data source outside of Flask to hold global data. A database, memcached, or redis are all appropriate separate storage areas, depending on your needs.
(Source: Are global variables thread safe in flask? How do I share data between requests?)
My question is on that last part regarding suggestions on how to provide the data "outside" of Flask. Currently, my web app is really small and I'd like to avoid requirements or dependencies on other programs. What options do I have if I don't want to run Redis or anything else in the background but provide everything with the Python code of the web app?
If your webserver's worker type is compatible with the multiprocessing module, you can use multiprocessing.managers.BaseManager to provide a shared state for Python objects. A simple wrapper could look like this:
from multiprocessing import Lock
from multiprocessing.managers import AcquirerProxy, BaseManager, DictProxy

def get_shared_state(host, port, key):
    shared_dict = {}
    shared_lock = Lock()
    manager = BaseManager((host, port), key)
    manager.register("get_dict", lambda: shared_dict, DictProxy)
    manager.register("get_lock", lambda: shared_lock, AcquirerProxy)
    try:
        manager.get_server()
        manager.start()
    except OSError:  # Address already in use
        manager.connect()
    return manager.get_dict(), manager.get_lock()
You can assign your data to the shared_dict to make it accessible across processes:
import numpy

HOST = "127.0.0.1"
PORT = 35791
KEY = b"secret"

shared_dict, shared_lock = get_shared_state(HOST, PORT, KEY)
shared_dict["number"] = 0
shared_dict["text"] = "Hello World"
shared_dict["array"] = numpy.array([1, 2, 3])
However, you should be aware of the following circumstances:
Use shared_lock to protect against race conditions when overwriting values in shared_dict. (See Flask example below.)
There is no data persistence. If you restart the app, or if the main (the first) BaseManager process dies, the shared state is gone.
With this simple implementation of BaseManager, you cannot directly edit nested values in shared_dict. For example, shared_dict["array"][1] = 0 has no effect. You will have to edit a copy and then reassign it to the dictionary key.
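A minimal sketch of the copy-and-reassign workaround for nested values, reusing shared_dict and shared_lock from above:

with shared_lock:
    arr = shared_dict["array"]  # the proxy hands back a copy of the stored array
    arr[1] = 0                  # edit the local copy
    shared_dict["array"] = arr  # reassign so the change reaches the manager process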
Flask example:
The following Flask app uses a global variable to store a counter number:
from flask import Flask

app = Flask(__name__)
number = 0

@app.route("/")
def counter():
    global number
    number += 1
    return str(number)
This works when using only 1 worker (gunicorn -w 1 server:app). When using multiple workers (gunicorn -w 4 server:app), it becomes apparent that number is not shared state but is individual to each worker process.
Instead, with shared_dict, the app looks like this:
from flask import Flask

app = Flask(__name__)

HOST = "127.0.0.1"
PORT = 35791
KEY = b"secret"

shared_dict, shared_lock = get_shared_state(HOST, PORT, KEY)
shared_dict["number"] = 0

@app.route("/")
def counter():
    with shared_lock:
        shared_dict["number"] += 1
        return str(shared_dict["number"])
This works with any number of workers, like gunicorn -w 4 server:app.
Your example is a bit magic for me! I'd suggest reusing the machinery already in the multiprocessing codebase in the form of a Namespace. I've attempted to make the following code compatible with spawn servers (i.e. MS Windows), but I only have access to Linux machines, so I can't test there.
Start by pulling in dependencies, defining our custom Manager, and registering a method to get out a Namespace singleton:
from multiprocessing.managers import BaseManager, Namespace, NamespaceProxy

class SharedState(BaseManager):
    _shared_state = Namespace(number=0)

    @classmethod
    def _get_shared_state(cls):
        return cls._shared_state

SharedState.register('state', SharedState._get_shared_state, NamespaceProxy)
This might need to be more complicated if creating the initial state is expensive and hence should only be done when it's needed. Note that the OP's version of initialising state during process startup will cause everything to reset if gunicorn starts a new worker process later, e.g. after killing one due to a timeout.
Next I define a function to get access to this shared state, similar to how the OP does it:
def shared_state(address, authkey):
    manager = SharedState(address, authkey)
    try:
        manager.get_server()  # raises if another server started
        manager.start()
    except OSError:
        manager.connect()
    return manager.state()
Though I'm not sure if I'd recommend doing things like this. When gunicorn starts, it spawns lots of processes that all race to run this code, and it wouldn't surprise me if this could go wrong sometimes. Also, if it happens to kill off the server process (because of e.g. a timeout), every other process will start to fail.
That said, if we wanted to use this we would do something like:
ss = shared_state('server.sock', b'noauth')
ss.number += 1
this uses Unix domain sockets (passing a string rather than a tuple as an address) to lock this down a bit more.
Also note this has the same race condition as the OP's code: incrementing a number will cause the value to be transferred to the worker's process, where it is incremented, and then sent back to the server. I'm not sure what the _lock is supposed to be protecting, but I don't think it'll do much.
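If serialized increments are needed, one option is to let the manager hand out a lock as well, mirroring the get_lock registration in the first answer; a hedged sketch (the shared_state_and_lock helper name is mine):

from multiprocessing import Lock
from multiprocessing.managers import AcquirerProxy

_increment_lock = Lock()
SharedState.register('lock', lambda: _increment_lock, AcquirerProxy)

def shared_state_and_lock(address, authkey):
    # same connect-or-start dance as shared_state above
    manager = SharedState(address, authkey)
    try:
        manager.get_server()  # raises if another server started
        manager.start()
    except OSError:
        manager.connect()
    return manager.state(), manager.lock()

ns, lock = shared_state_and_lock('server.sock', b'noauth')
with lock:          # serializes the read-modify-write across workers
    ns.number += 1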

Python3: RuntimeError: can't start new thread [duplicate]

I have a site that runs with the following configuration:
Django + mod_wsgi + Apache
In one user request, I send another HTTP request to another service, and I solve this with Python's httplib library.
But sometimes this service doesn't answer for a very long time, and the timeout for httplib doesn't work. So I create a thread, send the request to the service in that thread, and join it after 20 sec (20 sec is the request timeout). This is how it works:
class HttpGetTimeOut(threading.Thread):
    def __init__(self, **kwargs):
        self.config = kwargs
        self.resp_data = None
        self.exception = None
        super(HttpGetTimeOut, self).__init__()

    def run(self):
        h = httplib.HTTPSConnection(self.config['server'])
        h.connect()
        sended_data = self.config['sended_data']
        h.putrequest("POST", self.config['path'])
        h.putheader("Content-Length", str(len(sended_data)))
        h.putheader("Content-Type", 'text/xml; charset="utf-8"')
        if 'base_auth' in self.config:
            base64string = base64.encodestring('%s:%s' % self.config['base_auth'])[:-1]
            h.putheader("Authorization", "Basic %s" % base64string)
        h.endheaders()
        try:
            h.send(sended_data)
            self.resp_data = h.getresponse()
        except httplib.HTTPException, e:
            self.exception = e
        except Exception, e:
            self.exception = e
something like this...
And I use it via this function:
getting = HttpGetTimeOut(**req_config)
getting.start()
getting.join(COOPERATION_TIMEOUT)
if getting.isAlive():  # maybe need some block
    getting._Thread__stop()
    raise ValueError('Timeout')
else:
    if getting.resp_data:
        r = getting.resp_data
    else:
        if getting.exception:
            raise ValueError('Request Exception')
        else:
            raise ValueError('Undefined exception')
And all works fine, but sometimes I start catching this exception:
error: can't start new thread
at the line that starts the new thread:
getting.start()
and the next and final line of the traceback is:
File "/usr/lib/python2.5/threading.py", line 440, in start
    _start_new_thread(self.__bootstrap, ())
So the question is: what is happening?
Thanks to all, and sorry for my poor English. :)
The "can't start new thread" error almost certainly due to the fact that you have already have too many threads running within your python process, and due to a resource limit of some kind the request to create a new thread is refused.
You should probably look at the number of threads you're creating; the maximum number you will be able to create will be determined by your environment, but it should be in the order of hundreds at least.
It would probably be a good idea to rethink your architecture here; seeing as this runs asynchronously anyhow, perhaps you could use a pool of threads to fetch resources from the other site instead of always starting up a thread for every request.
Another improvement to consider is your use of Thread.join and Thread.stop: this would probably be better accomplished by providing a timeout value to the constructor of HTTPSConnection.
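A sketch of that last suggestion; the timeout parameter of HTTPSConnection exists from Python 2.6 onward (the traceback above shows Python 2.5, where it is not available yet):

import httplib

# the timeout applies to the underlying socket operations (connect and read)
h = httplib.HTTPSConnection('example.com', timeout=20)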
You are starting more threads than your system can handle. There is a limit to the number of threads that can be active for one process.
Your application is starting threads faster than the threads are running to completion. If you need to start many threads, you need to do it in a more controlled manner; I would suggest using a thread pool.
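A minimal sketch of the thread pool idea using multiprocessing.pool.ThreadPool (present since Python 2.6, though only lightly documented); fetch_one and configs are stand-ins for the question's request logic:

from multiprocessing.pool import ThreadPool

def fetch_one(req_config):
    # stand-in for the HTTP request logic from the question
    return req_config

configs = [{'server': 'example.com', 'path': '/'}] * 10  # hypothetical request configs

pool = ThreadPool(processes=20)          # a fixed number of worker threads is reused
results = pool.map(fetch_one, configs)   # blocks until all requests complete
pool.close()
pool.join()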
I was running into a similar situation, but my process needed a lot of threads running to take care of a lot of connections.
I counted the number of threads with the command:
ps -fLu user | wc -l
It displayed 4098.
I switched to that user and looked at the system limits:
sudo -u myuser -s /bin/bash
ulimit -u
Got 4096 as response.
So, I edited /etc/security/limits.d/30-myuser.conf and added the lines:
myuser hard nproc 16384
myuser soft nproc 16384
Restarted the service and now it's running with 7017 threads.
P.S. I have a 32-core server and I'm handling 18k simultaneous connections with this configuration.
I think the best way in your case is to set a socket timeout instead of spawning a thread:

h = httplib.HTTPSConnection(self.config['server'],
                            timeout=self.config['timeout'])
You can also set a global default timeout with the socket.setdefaulttimeout() function.
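For completeness, the global default looks like this; it affects every socket created afterwards that doesn't set its own timeout:

import socket

socket.setdefaulttimeout(20)  # seconds; applies to sockets created after this call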
Update: see the answers to the question Is there any way to kill a Thread in Python? (there are several quite informative ones) to understand why: Thread.__stop() doesn't terminate the thread, but rather sets an internal flag so that it's considered already stopped.
I completely rewrote the code from httplib to pycurl:
import pycurl
import StringIO

c = pycurl.Curl()
c.setopt(pycurl.FOLLOWLOCATION, 1)
c.setopt(pycurl.MAXREDIRS, 5)
c.setopt(pycurl.CONNECTTIMEOUT, CONNECTION_TIMEOUT)
c.setopt(pycurl.TIMEOUT, COOPERATION_TIMEOUT)
c.setopt(pycurl.NOSIGNAL, 1)
c.setopt(pycurl.POST, 1)
c.setopt(pycurl.SSL_VERIFYHOST, 0)
c.setopt(pycurl.SSL_VERIFYPEER, 0)
c.setopt(pycurl.URL, "https://" + server + path)
c.setopt(pycurl.POSTFIELDS, sended_data)

b = StringIO.StringIO()
c.setopt(pycurl.WRITEFUNCTION, b.write)
c.perform()
Something like that.
And I am testing it now. Thanks to all of you for your help.
If you are trying to set a timeout, why don't you use urllib2?
I'm running a Python script on my machine only to copy and convert some files from one format to another, and I want to maximize the number of running threads to finish as quickly as possible.
Note: this is not a good workaround from an architecture perspective if you aren't using it for a quick script on a specific machine.
In my case, I checked the maximum number of running threads that my machine could run before I got the error; it was 150.
I added this code before starting a new thread. It checks whether the maximum limit of running threads has been reached; if so, the app waits until some of the running threads finish, then starts the new thread:
while threading.active_count() > 150:
    time.sleep(5)
mythread.start()
If you are using a ThreadPoolExecutor, the problem may be that your max_workers is higher than the number of threads allowed by your OS.
It seems that the executor keeps the information of the last executed threads in the process table, even if the threads are already done. This means that when your application has been running for a long time, it will eventually register in the process table as many threads as ThreadPoolExecutor.max_workers.
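A sketch of bounding max_workers explicitly instead of relying on the default (the 150 cap mirrors the limit measured in the earlier answer; work and items are placeholders):

from concurrent.futures import ThreadPoolExecutor

def work(item):
    return item * 2  # stand-in for the real task

items = range(1000)

# keep max_workers safely below the OS per-process thread limit
with ThreadPoolExecutor(max_workers=150) as executor:
    results = list(executor.map(work, items))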
As far as I can tell, it's not a Python problem. Your system somehow cannot create another thread (I had the same problem and couldn't even start htop in another CLI via ssh).
The answer by Fernando Ulisses dos Santos is really good. I just want to add that there are other tools limiting the number of processes and memory usage "from the outside". It's pretty common for virtual servers. The starting point is the interface of your vendor, or you might have luck finding some information in files like
/proc/user_beancounters

SQLite-connect() blocks due to unrelated connect() on separate process

I want to launch a separate process that connects to a SQLite db (the ultimate goal is to run a service on that process). This typically works fine. However, when I connect to another db file before launching the process, the connect() command blocks completely: it neither finishes nor raises an error.
import sqlite3, multiprocessing, time

def connect(filename):
    print 'Creating a file on current process!'
    sqlite3.connect(filename).close()

def connect_process(filename):
    def process_f():
        print 'Gets here...'
        conn = sqlite3.connect(filename)
        print '...but not here when local process has previously connected to any unrelated sqlite-file!!'
        conn.close()
    process = multiprocessing.Process(target=process_f)
    process.start()
    process.join()

if __name__ == '__main__':
    connect_process('my_db_1')  # Just to show that it generally works
    time.sleep(0.5)
    connect('any_file')         # Connect to unrelated file
    connect_process('my_db_2')  # Does not get to the end!!
    time.sleep(2)
This returns:
Gets here...
...but not here when local process has connected to any unrelated sqlite-file!!
Creating a file on current process!
Gets here...
So we would expect another line, ...but not here when..., to be printed at the end.
Remarks:
I know that SQLite cannot handle concurrent access. It should however work here for two reasons: 1) the file I connect to on my local process is different from the separately created one, and 2) the connection to the former file has long been closed by the time the process gets created.
The only operation I use here is to connect to the DB and then close the connection immediately (which creates the file if it doesn't exist). I have of course verified that we get the same behavior if we actually do anything meaningful.
The code is just a minimal working example of what I really want to do. The goal is to test a service that uses SQLite. Hence, in the test setup, I need to create some mock SQLite files. The service is then launched on a separate process in order to test it via the respective client.
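One hedged thing worth trying: with the default fork start method the child inherits the parent process's state, which can interact badly with libraries that hold internal locks; on Python 3.4+ the spawn start method gives the child a fresh interpreter instead:

import multiprocessing

if __name__ == '__main__':
    # spawn starts the child with a fresh interpreter, so no state from the
    # parent (including anything sqlite set up) is inherited via fork
    multiprocessing.set_start_method('spawn')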

Python script with multiple threads works normally only in debug mode

I am currently working on a Python 2.7 script with multiple threads. One of the threads listens for JSON data in long-polling mode and parses it after receiving it, or goes into a timeout after some period. I noticed that it works as expected only in debug mode (I use Wing IDE). In a normal run, this particular thread of the script seems to hang after the first GET request, before entering the for loop. The loop condition doesn't affect the result. At the same time, the other threads continue to work normally.
I believe this is related to multi-threading. How do I properly troubleshoot and fix this issue?
Below is the code of the class responsible for the long-polling job.
class Listener(threading.Thread):
    def __init__(self, router, *args, **kwargs):
        self.stop = False
        self._cid = kwargs.pop("cid", None)
        self._auth = kwargs.pop("auth", None)
        self._router = router
        self._c = webclient.AAHWebClient()
        threading.Thread.__init__(self, *args, **kwargs)

    def run(self):
        while True:
            try:
                # Data items that should be routed to the device are retrieved by doing a
                # long polling GET request on the "/tunnel" resource. This will block until
                # there are data items available, or the request times out
                log.info("LISTENER: Waiting for data...")
                response = self._c.send_request("GET", self._cid, auth=self._auth)
                # A timed out request will not contain any data
                if len(response) == 0:
                    log.info("LISTENER: No data this time")
                else:
                    items = response["resources"]["tunnel"]
                    undeliverable = []
                    # print items  # - reaching this point, able to return output
                    for item in items:
                        # The data items contain the data as a base64 encoded string and the
                        # external reference ID for the device that should receive it
                        extId = item["extId"]
                        data = base64.b64decode(item["data"])
                        # Try to deliver the data to the device identified by "extId"
                        if not self._router.route(extId, data):
                            item["message"] = "Could not be routed"
                            undeliverable.append(item)
                    # Data items that for some reason could not be delivered to the device should
                    # be POST:ed back to the "/tunnel" resource as "undeliverable"
                    if len(undeliverable) > 0:
                        log.warning("LISTENER: Sending error report...")
                        response = self._c.send_request("POST", "/tunnel", body={"undeliverable": undeliverable}, auth=self._auth)
            except webclient.RequestError as e:
                log.error("LISTENER: ERROR %d - %s", e.status, e.response)
UPD:
class Router:
    def route(self, extId, data):
        log.info("ROUTER: Received data for %s: %s", extId, repr(data))
        # nothing special
        return True
If you're using the CPython interpreter, your threads aren't truly running in parallel:
CPython implementation detail: In CPython, due to the Global
Interpreter Lock, only one thread can execute Python code at once
(even though certain performance-oriented libraries might overcome
this limitation). If you want your application to make better use of
the computational resources of multi-core machines, you are advised to
use multiprocessing. However, threading is still an appropriate model
if you want to run multiple I/O-bound tasks simultaneously.
So your process is probably locking up while listening on the first request, because you are long polling.
Multiprocessing might be a better choice. I haven't tried it with long polling, but the Twisted framework might also work in your situation.
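A minimal sketch of the multiprocessing suggestion, moving the listener into its own process so it cannot interfere with the other threads (Listener and Router are the classes from the question):

import multiprocessing

def run_listener(router):
    # runs in a separate process with its own interpreter and its own GIL
    Listener(router).run()

if __name__ == '__main__':
    p = multiprocessing.Process(target=run_listener, args=(Router(),))
    p.daemon = True  # don't block program exit on the listener
    p.start()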
