What is the difference between connecting to the MongoDB server with the following two lines in a models.py module and then importing models.py inside views.py:
from pymongo import MongoClient
db = MongoClient()['name']
versus adding db to request as described here or here?
I just started playing around with Pyramid and MongoDB. I used the first approach and it works well. Then I found out that people use the second approach.
Am I doing something wrong?
There's nothing wrong with what you're doing, but it's less future-proof in case your app becomes complex. The pattern you're using is what is sometimes called "using a module as a singleton". The first time your module is imported, the code runs, creating a module-level object that can be used from any other code that imports from this module. There's nothing wrong with this; it's a normal Python pattern and is the reason you don't see much in the way of singleton boilerplate in Python land.
However, in a complex app, it can become useful to control exactly when something happens, regardless of who imports what and when. When we create the client at config time as per the docs example, you know that it gets created while the config (server startup) block is running, as opposed to whenever any code imports your module, and you know from then on that it's available through your registry, which is accessible everywhere in a Pyramid app through the request object. This is normal Pyramid best practice: set up all your one-time, shared-across-requests machinery in the server startup code where you create your configurator, and (probably) attach it to the configurator or its registry.
This is the same reason we hook things into request lifecycle callbacks: it allows us to know where and when some piece of per-request code executes, and to make sure that a clean-up helper always fires at the end of the request lifecycle. So for DB access, we create the shared machinery at config startup, and at the beginning of a request we set up the per-request connection, cleaning up afterwards at the end of the request. For an SQL db, this would mean starting the transaction, and then committing or rolling back at the end.
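As a rough, minimal sketch of what that config-time setup can look like (assuming pymongo and the 'name' database from your question; the main function and attribute names are just illustrative, not the one true way):

from pymongo import MongoClient
from pyramid.config import Configurator

def main(global_config, **settings):
    config = Configurator(settings=settings)

    # One shared client, created once at server startup and stored on
    # the registry so it is reachable from every request.
    config.registry.mongo_client = MongoClient()

    # Expose the database as request.db; reify=True caches it per request.
    def mongo_db(request):
        return request.registry.mongo_client['name']
    config.add_request_method(mongo_db, 'db', reify=True)

    config.scan()
    return config.make_wsgi_app()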
So it might not matter at all for your app right now, but it's good practice for growing code bases. Most of the Pyramid design decisions were made with complex code situations in mind.
I have a file called db.py with the following code:
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
engine = create_engine('sqlite:///my_db.sqlite')
session = scoped_session(sessionmaker(bind=engine, autoflush=True))
I am trying to import this file in various subprocesses started using a spawn context (potentially important, since various fixes that worked for fork don't seem to work for spawn).
The import statement is something like:
from db import session
and then I use this session ad libitum without worrying about concurrency, assuming SQLite's internal locking mechanism will order transactions so as to avoid concurrency errors; I don't really care about transaction order.
This seems to result in errors like the following:
sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 139813508335360 and this is thread id 139818279995200.
Mind you, this doesn't seem to directly affect my program; every transaction goes through just fine, but I am still worried about what's causing this.
My understanding was that scoped_session was thread-local, so I could import it however I want without issues. Furthermore, my assumption was that SQLAlchemy will always handle the closing of connections and that SQLite will handle ordering (i.e. make a session wait for another session to end before it can do any transaction).
Obviously one of these assumptions is wrong, or I am misunderstanding something basic about the mechanism here, but I can't quite figure out what. Any suggestions would be useful.
The problem isn't about thread-local sessions, it's that the original connection object is in a different thread to those sessions. SQLite disables using a connection across different threads by default.
The simplest answer to your question is to turn off SQLite's same-thread checking. In SQLAlchemy you can achieve this by specifying it as part of your database URL:
engine = create_engine('sqlite:///my_db.sqlite?check_same_thread=False')
I'm guessing that will do away with the errors, at least.
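If you prefer not to encode the flag in the URL, the same thing can, to my knowledge, be passed through SQLAlchemy's connect_args, which hands keyword arguments straight to sqlite3.connect:

from sqlalchemy import create_engine

# Equivalent to the URL parameter above: the flag is forwarded to
# sqlite3.connect(), disabling the same-thread check.
engine = create_engine(
    'sqlite:///my_db.sqlite',
    connect_args={'check_same_thread': False},
)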
Depending on what you're doing, this may still be dangerous - if you're ensuring your transactions are serialised (that is, one after the other, never overlapping or simultaneous) then you're probably fine. If you can't guarantee that then you're risking data corruption, in which case you should consider a) using a database backend that can handle concurrent writes, or b) creating an intermediary app or service that solely manages sqlite reads and writes and that your other apps can communicate with. That latter option sounds fun but be warned you may end up reinventing the wheel when you're better off just spinning up a Postgres container or something.
Ever since I read
An untested application is broken
in the Flask documentation about testing here,
I have been working down my list of things to make for some of my applications.
I currently have a Flask web app, and when I write a new route I just write a requests.get('https://api.github.com/user', auth=('user', 'pass')), post, put, etc. to test the route.
Is this a decent alternative? Or should I try and do tests via what flask's documentation says, and if so why?
Fundamentally it's the same concept: you are running functional tests, as they do. However, you have a prerequisite: a live application running somewhere (if I got it right). They create a fake application (a mock) so you can test it without it being live, e.g. when you want to run tests in a CI environment.
In my opinion it's a better alternative than a live system. Your current approach consumes more resources on your local machine, since you are required to run the whole system to test something (i.e. at least a DB and the application itself). In their approach you don't; the fake instance does not need real data, and thus no connection to a DB or any other external dependency.
I suggest you switch to their style of testing; in the end you will like it.
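For reference, here is a minimal sketch of what the documented approach looks like with Flask's built-in test client (the myapp module and /user route are hypothetical stand-ins for your own app):

from myapp import app  # your Flask application object

def test_user_route():
    # The test client issues the request in-process: no live server,
    # no network, so it runs fine in a CI environment.
    client = app.test_client()
    response = client.get('/user')
    assert response.status_code == 200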
When my app runs, I'm very frequently getting issues around the connection pooling (one is "QueuePool limit of size 5 overflow 10 reached", another is "FATAL: remaining connection slots are reserved for non-replication superuser connections").
I have a feeling that it's due to some code not closing connections properly, or other code greedily trying to open new ones when it shouldn't, but I'm using the default SQLAlchemy settings, so I assume the pool connection defaults shouldn't be unreasonable. We are using the scoped_session(sessionmaker()) way of creating the session so multiple threads are supported.
So my main question is: is there a tool or way to find out where the connections are going? Short of being able to see as soon as a new one is created (that is not supposed to be created), are there any obvious anti-patterns that might result in this effect?
Pyramid is very unopinionated, and with DB connections there seem to be two main approaches (equally supported by Pyramid, it would seem). In our case, the code base used one approach when I started the job (I'll call it the "globals" approach), and we've agreed to switch to another approach that relies less on globals and more on Pythonic idioms.
About our architecture: the application comprises one repo which houses the Pyramid project and then sources a number of other git modules, each of which has its own connection setup. The "globals" way connects to the database in a very non-ORM fashion, e.g.:
(in each repo's __init__ file)
from sqlalchemy import Table, Column, String

def load_database():
    global tables
    tables['table_name'] = Table(
        'table_name', metadata,
        Column('column_name', String),
    )
There are related globals that are frequently peppered all over the code:
import sqlalchemy

def function_needing_data(field_value):
    global db, tables
    select = sqlalchemy.sql.select(
        [tables['table_name'].c.data],
        tables['table_name'].c.name == field_value)
    return db.execute(select)
This tables variable is latched onto within each git repo, which adds some more table definitions, and somehow the global tables manages to work, providing access to all of the tables.
The approach that we've moved to (although at this time, there are parts of both approaches still in the code) is via a centralised connection, binding all of the metadata to it and then querying the db in an ORM approach:
(model)
class ModelName(MetaDataBase):
    __tablename__ = "models_table_name"
    ... (field values)
(function requiring data)
from models.db import DBSession
from models.model_name import ModelName

def function_needing_data(field_value):
    return DBSession.query(ModelName).filter(
        ModelName.field_value == field_value).all()
We've largely moved the code over to the latter approach which feels right, but perhaps I'm mistaken in my intentions. I don't know if there is anything inherently good or bad in either approach but could this (one of the approaches) be part of the problem so we keep running out of connections? Is there a telltale sign that I should look out for?
It appears that Pyramid functions best (in terms of handling the connection pool) when you use the Pyramid transaction manager (pyramid_tm). This excellent article by Jon Rosebaugh provides some helpful insight into both how Pyramid apps typically set up their database connections and how they should set them up.
In my case, it was necessary to include the pyramid_tm package and then remove a few occurrences where we were manually committing session changes since pyramid_tm will automatically commit changes if it doesn't see a reason not to.
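As a rough sketch (not our exact code), wiring this up usually looks something like the following; note that the SQLAlchemy session typically also has to be registered with zope.sqlalchemy so it joins the transaction that pyramid_tm manages:

from pyramid.config import Configurator
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
import zope.sqlalchemy

DBSession = scoped_session(sessionmaker())

def main(global_config, **settings):
    engine = create_engine(settings['sqlalchemy.url'])
    DBSession.configure(bind=engine)
    # Let the scoped session participate in the managed transaction.
    zope.sqlalchemy.register(DBSession)

    config = Configurator(settings=settings)
    config.include('pyramid_tm')  # commit/abort happens once per request
    config.scan()
    return config.make_wsgi_app()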
[Update]
I continued to have connection pooling issues, although many fewer of them. After a lot of debugging, I found that the Pyramid transaction manager (if you're using it correctly) should not be the issue at all. The remaining connection pooling issues I had had to do with scripts that ran via cron jobs. A script will release its connections when it's finished, but bad code design may result in situations where the same script can be opened up and start running while the previous one is still running (causing them both to run slower, slow enough to have both running while a third instance of the script starts, and so on).
This is a more language- and database-agnostic error since it stems from poor job-scripting design but it's worth keeping in mind. In my case, the script had an "&" at the end so that each instance started as a background process, waited 10 seconds, then spawned another, rather than making sure the first job started AND completed, then waited 10 seconds, then started another.
Hope this helps when debugging this very frustrating and thorny issue.
Is it possible to use the python reload command (or similar) on a single module in a standalone cherrypy web app? I have a CherryPy based web application that often is under continual usage. From time to time I'll make an "important" change that only affects one module. I would like to be able to reload just that module immediately, without affecting the rest of the web application. A full restart is, admittedly, fast, however there are still several seconds of downtime that I would prefer to avoid if possible.
Reloading modules is very, very hard to do in a sane way. It leads to the potential of stale objects in your code with impossible-to-interrogate state and subtle bugs. It's not something you want to do.
What real web applications tend to do is to have a server that stays alive in front of their application, such as Apache with mod_proxy, to serve as a reverse proxy. You start your new app server, change your reverse proxy's routing, and only then kill the old app server.
No downtime. No insane, undebuggable code.
I am a PHP programmer learning Python, when ever I get a chance.
I read that Python web applications stay active between requests.
Meaning that data stays in memory and is available between requests, right?
I am wondering how that works.
In php we place a cookie with a unique token, and save data in sessions.
Sessions are arrays, saved on disk or database.
Between requests, the session functions restore the correct session array based on the cookie with the unique token. That means each browser gets its own unique session, and the session has a preset expiration time. If the user is inactive and the expiration gets triggered, then the session gets purged. A new session has to be created when the user comes back.
My understanding is Python doesn't need this, because the application stays active between requests.
Doesn't each request get a unique thread in Python?
How does it distinguish between requests, who the requester is?
Is there a handling method to separate vars between users and application?
Let's say I have a dict saved; is this dict globally available to all requests from any browser, or only to that one browser?
When and how does the memory get cleared, if everything stays in memory? What if the app is running for a couple of years without a restart? There must be some kind of expiration setting or memory handling?
One commenter says it depends on the web app. So I am using Bottle.py to learn.
I would assume the answer would depend on which web application framework you are using within Python. Some of them have session management pieces in them that track a user across requests. But if you just had a basic port listener that responded with HTTP, you would have to build any cookie support or session management yourself.
The other big difference is that in PHP, you have a module installed on the server that the actual HTTP server delegates to in order to generate a response. PHP doesn't handle the routing or the actual serving of the responses, whereas Python can actually be the server and the resource for generating the response. It depends on how Python is installed/accessed on the machine where the server is running. So in that sense you can do whatever you want within a Python web application.
If you are interested, you should look at some available python web frameworks.
Edit: I see you mentioned bottle.py, and out of the box, it does not provide authentication and session management because it's a micro framework for fast prototyping and not necessarily suitable for a large scale application (although not impossible, just a lot of work).
Yes and no. If you check out this question you get an idea how it could work for a Django application.
However, the way you state it, it will not work. Defining a dict in one request without passing it somewhere in order to make it accessible in following requests will obviously not make it available in further requests. So, yes, you have the option to do this, but it's not the case out of the box!
I was able to persist an object in Python between requests before using Twisted's web server. I have not tried seeing for myself if it persists across browsers though but I have a feeling it does. Here's a code snippet from the documentation:
Twisted includes an event-driven web server. Here's a sample web application; notice how the resource object persists in memory, rather than being recreated on each request:
from twisted.web import server, resource
from twisted.internet import reactor

class HelloResource(resource.Resource):
    isLeaf = True
    numberRequests = 0

    def render_GET(self, request):
        self.numberRequests += 1
        request.setHeader("content-type", "text/plain")
        return "I am request #" + str(self.numberRequests) + "\n"

reactor.listenTCP(8080, server.Site(HelloResource()))
reactor.run()
First of all you should understand the difference between local and global variables in python, and also how thread local storage works.
This is a (very) short explanation:
global variables are those declared at module scope and are shared by all threads. They live as long as the process is running, unless explicitly removed
local variables are those declared inside a function and instantiated for each call of that function. They are deleted when the function is over unless it is still referenced somewhere else.
thread-local storage enables defining global variables that are specific to the current thread. They live as long as the current thread is running, unless explicitly removed (a short sketch follows below).
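A minimal sketch of thread-local storage with the standard library (the names here are just for illustration):

import threading

# Each thread sees its own independent set of attributes on this object.
request_context = threading.local()

def handle_request(user):
    request_context.user = user  # visible only to the current thread
    print(threading.current_thread().name, request_context.user)

for name in ('alice', 'bob'):
    threading.Thread(target=handle_request, args=(name,)).start()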
And now I'll try to answer your original questions (the answers are specific to bottle.py, but this is the most common implementation pattern among Python web servers).
Doesn't each request get a unique thread in Python?
Each concurrent request will have a separate thread; future requests might reuse previous threads.
How does it distinguish between requests, who the requester is?
bottle.py uses thread-local storage to access the current request.
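That is why you can simply import request in a route handler and it refers to the request currently being handled by that thread; a minimal sketch:

from bottle import Bottle, request

app = Bottle()

@app.route('/whoami')
def whoami():
    # `request` is a thread-local proxy, so each worker thread only
    # sees the request it is currently processing.
    return 'You are %s' % request.remote_addr

app.run(host='localhost', port=8080)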
Is there a handling method to separate vars between users and application?
Sounds like you are looking for a session. If so, there is no standard way of doing it, because different implementations have advantages and disadvantages. For example, this is a bottle.py middleware for sessions.
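I don't know which middleware the link pointed to, but as one hedged example, the Beaker session middleware is commonly wrapped around a bottle app roughly like this:

import bottle
from beaker.middleware import SessionMiddleware

session_opts = {
    'session.type': 'memory',  # sessions kept in process memory
    'session.auto': True,      # save automatically at the end of the request
}
app = SessionMiddleware(bottle.app(), session_opts)

@bottle.route('/count')
def count():
    # Beaker exposes the per-browser session via the WSGI environ.
    session = bottle.request.environ.get('beaker.session')
    session['visits'] = session.get('visits', 0) + 1
    return 'You have visited this page %d times' % session['visits']

bottle.run(app=app)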
Let's say I have a dict saved; is this dict globally available to all requests from any browser, or only to that one browser? When and how does the memory get cleared?
If everything stays in memory, what if the app is running for a couple of years without a restart? There must be some kind of expiration setting or memory handling?
Exactly, there must be an expiration setting. Since you are using a custom dict you need a timer that checks each entry in the dict for expiration.
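A minimal, assumption-laden sketch of that idea (purging on access rather than with a background timer, which is just one of several reasonable designs):

import time

class ExpiringDict:
    """Hypothetical store whose entries expire after max_age seconds."""

    def __init__(self, max_age=3600):
        self.max_age = max_age
        self._data = {}  # key -> (value, timestamp)

    def set(self, key, value):
        self._data[key] = (value, time.time())

    def get(self, key, default=None):
        self._purge()
        entry = self._data.get(key)
        return entry[0] if entry is not None else default

    def _purge(self):
        cutoff = time.time() - self.max_age
        self._data = {k: v for k, v in self._data.items() if v[1] >= cutoff}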