I want to create a simple app with Falcon that can manage a small SQLite database of hostname: IP records. I want to be able to replace rows in SQLite, so I decided to make hostname a unique field. I have a model.py:
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import create_engine, Column, Integer, String

Base = declarative_base()
DB_URI = 'sqlite:///clients.db'

class Client(Base):
    __tablename__ = 'clients'

    id = Column(Integer, primary_key=True)
    hostname = Column(String(50), unique=True)
    ip = Column(String(50))
My simple resources.py:
from falcon_autocrud.resource import CollectionResource, SingleResource
from models import *

class ClientCollectionResource(CollectionResource):
    model = Client
    methods = ['GET', 'POST']
When I make a POST request with updated information for a hostname:ip pair, I get a 'Unique constraint violated' error:
req = requests.post('http://localhost:8000/clients',
                    headers={'Content-Type': 'application/json'},
                    data=json.dumps({'hostname': 'laptop1', 'ip': '192.168.0.33'}))
req.content
>> b'{"title": "Conflict", "description": "Unique constraint violated"}'
Is there any way to replace existing records using SQLAlchemy? Or was I wrong to choose SQLite for this purpose?
When building a RESTful API you should not use POST to update existing resources; a POST to a resource should only ever create new resources. falcon-autocrud is doing the right thing here.
Instead, use PUT on the individual resource (the SingleResource resource registered for .../clients/<identifier>) to alter existing resources.
If you use hostname in your SingleResource definition then falcon-autocrud should automatically use that column as the identifier (assuming that your SingleResource subclass is called ClientResource):
app.add_route('/clients/{hostname}', ClientResource(db_engine))
at which point you can PUT the new ip value directly with:
requests.put('http://localhost:8000/clients/laptop1', json={'ip': '192.168.0.33'})
(Note that requests supports JSON requests directly; the json= keyword argument is encoded to JSON for you, and the Content-Type header is set for you automatically when you use it).
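For context, the wiring that makes both routes work might look like this (a minimal sketch following falcon-autocrud's documented setup; the middleware import and file names are assumptions based on that documentation):

import falcon
from falcon_autocrud.middleware import Middleware
from sqlalchemy import create_engine

from models import Base, DB_URI
from resources import ClientCollectionResource, ClientResource

db_engine = create_engine(DB_URI)
Base.metadata.create_all(db_engine)

# falcon-autocrud's middleware manages the DB session per request
app = falcon.API(middleware=[Middleware()])
app.add_route('/clients', ClientCollectionResource(db_engine))
app.add_route('/clients/{hostname}', ClientResource(db_engine))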
You may want to limit what fields are returned for your Client objects. With a unique hostname you wouldn't want to confuse clients by also sending the primary key column. I'd limit the response fields by setting the response_fields attribute on your resource classes:
class ClientCollectionResource(CollectionResource):
    model = Client
    response_fields = ['hostname', 'ip']
    methods = ['GET', 'POST']

class ClientResource(SingleResource):
    model = Client
    response_fields = ['hostname', 'ip']
I see that falcon-autocrud doesn't yet support PATCH requests on the collection that alter existing resources (only "op": "add" is supported), otherwise that'd be another route to alter existing entries too.
Related
I'm developing an API with Flask and I cannot retrieve queries from a MySQL database I've connected with flask-sqlalchemy (not plain sqlalchemy). This is a pre-existing database downloaded from my client's phpMyAdmin, so I haven't run db.create_all(): I simply created the connection string in config.py, then instantiated db = SQLAlchemy() and initialized it (db.init_app(app)) in my factory function (I'm using the factory pattern together with blueprints).
I've already checked and my computer is running the mysql process, the login credentials provided are correct and the database exists in my computer. I'm using MariaDB because I run Manjaro Linux.
This is the connection string, located in config.py:
SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_URL') or "mariadb+mariadbconnector://dev:dev@localhost/desayunos56"
This is the relevant model. It was created using flask-sqlacodegen and then modified by me to only use the relevant columns within the table. At models.py:
# coding: utf-8
from flask_sqlalchemy import SQLAlchemy
from app import db

# post_id: Order ID
# meta_key: Type of value (client name, billing address)
# meta_value: Value of meta_key (Name or address itself)
t_aus_postmeta = db.Table(
    'aus_postmeta',
    # db.Column('meta_id', db.BigInteger, nullable=False),
    db.Column('post_id', db.BigInteger, nullable=False, server_default=db.FetchedValue()),
    db.Column('meta_key', db.String(255, 'utf8mb4_unicode_ci')),
    db.Column('meta_value', db.String(collation='utf8mb4_unicode_ci'))
)
And finally, this is the file with the error, views.py. It's a blueprint already registered to __init__.py. I created it only with the intention of checking if I could run queries, but I don't really intend to render anything from Flask:
from flask import render_template
from . import main
from .. import db
from app.models import t_aus_postmeta

@main.route("/", methods=["GET"])
def index():
    result = t_aus_postmeta.query_by(post_id=786).first()
This is the error I get: AttributeError: 'Table' object has no attribute 'query_by'
I think it's noteworthy that, although my linter doesn't complain about unresolved imports, I don't get any method suggestions when I use t_aus_postmeta.
All the questions I've checked are based on using sqlalchemy instead of flask-sqlalchemy. What could be causing this error? At this point, I'm at a loss.
I don't think that's the right way to create your model. Instead you should create it as a class that inherits from db.Model, which is what provides the query interface.
models.py
class t_aus_postmeta(db.Model):
    """
    post_id: Order ID
    meta_key: Type of value (client name, billing address)
    meta_value: Value of meta_key (Name or address itself)
    """

    __tablename__ = 'aus_postmeta'

    post_id = db.Column(db.BigInteger(), nullable=False, server_default=db.FetchedValue())
    # rest of your columns...
If you do it this way a valid query would look like this:
t_aus_postmeta.query.filter_by(post_id=786).first()
Notice that this incorporates tutiplain's suggestion. I think you simply got the method name wrong: it's query followed by a filter_by, which takes keyword arguments.
I can't find the API reference for the "query_by" method you are using. It seems there is no such method. Perhaps you meant "filter_by" instead?
I want to encrypt data in Postgres and then decrypt and read it back. I'd prefer to use SQLAlchemy and the ORM, but if that is difficult I'm curious to know about other approaches as well.
I tried the code below. It encrypts the data into the database, but it never asks me for any key for decryption. Why is that?
import sqlalchemy as sa
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy_utils import EncryptedType
from sqlalchemy_utils.types.encrypted.encrypted_type import AesEngine

secret_key = 'secretkey1234'
connection_string = '***********'

engine = create_engine(connection_string)
connection = engine.connect()
sa.orm.configure_mappers()
Session = sessionmaker(bind=connection)
session = Session()
Base = declarative_base()

class User(Base):
    __tablename__ = "user"

    id = sa.Column(sa.Integer, primary_key=True)
    username = sa.Column(EncryptedType(sa.Unicode, secret_key, AesEngine, 'pkcs5'))
    number_of_accounts = sa.Column(EncryptedType(sa.Integer, secret_key, AesEngine, 'oneandzeroes'))

Base.metadata.create_all(connection)
I run the below code for the decryption:
user_id = user.id
session.expunge_all()
user_instance = session.query(User).get(user_id)
print('username: {}'.format(user_instance.username))
You have likely figured this out by now, as this question is a few years old, but for anyone else looking:
You are interacting with your Postgres tables through the model classes you define (in your example User).
When you execute a query, the returned data is passed through the model class to determine how to process it. From your example, a query will return results for id, username and number_of_accounts. If you were to log each element returned, id would be processed as an int, because that is how it is defined in your model.
Similarly, username and number_of_accounts are processed based on their definitions in the User class, as EncryptedType() values. This is a more complex datatype, though: your model defines the key and engine to use for encryption and decryption. Values are encrypted before being stored, and when results are read back they are decrypted using that same context, in this case the AesEngine with the key 'secretkey1234'. That is why you don't need to specify a key on read: it is already defined in your model.
If you were to run a select * from user limit 1; query directly on your Postgres db, the values displayed for your two encrypted columns would remain encrypted, as you would not be passing the results through your defined model.
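To make that concrete, here is a small sketch (reusing the question's User model, session, and connection) contrasting an ORM read with a raw read:

# Write a row; values are encrypted transparently on the way in.
user = User(username='alice', number_of_accounts=3)
session.add(user)
session.commit()

# Read through the ORM: EncryptedType decrypts with the key/engine
# declared on the column, so no key is needed at query time.
loaded = session.query(User).get(user.id)
print(loaded.username)  # 'alice'

# Read the raw row, bypassing the model: the stored value stays encrypted.
row = connection.execute(sa.text('SELECT username FROM "user"')).fetchone()
print(row[0])  # ciphertext, not 'alice'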
Let's say I have a User model with attributes id, name, email and a relationship languages.
Is it possible to create a User instance from existing data that behaves like I would have queried it with dbsession.query(User).get(42)?
What I mean in particular is that I want that an access to user.languages creates a subquery and populates the attribute.
Here is a code example:
I have a class User:
class User(Base):
    id = Column(Integer, primary_key=True)
    name = Column(String(64))
    email = Column(String(64))
    languages = relationship('Language', secondary='user_languages')
I already have a lot of users stored in my DB.
And I know that I have, for example, this user in my DB:
user_dict = {
    'id': 23,
    'name': 'foo',
    'email': 'foo@bar',
}
So I have all the attributes but the relations.
Now I want to make a SQLAlchemy User instance and register it in SQLAlchemy's system, so that I can get the languages if needed.
user = User(**user_dict)

# Now I can access the id, name, email attributes
assert user.id == 23

# but since sqlalchemy thinks it's a new model it doesn't
# lazy-load any relationships
assert len(user.languages) == 0

# I want the languages for the user with id 23 to appear here,
# i.e. I want `user` to behave the same as if I had done
user_from_db = DBSession.query(User).get(23)
assert user == user_from_db
The use case is that I have a big model with lots of complex relationships, but 90% of the time I don't need the data from those. So I only want to cache the direct attributes plus whatever else I need, load those from the cache as above, and be able to use the SQLAlchemy model as if I had queried it from the db.
From the sqlalchemy mailing list:
# to make it look like it was freshly loaded from the db
from sqlalchemy.orm.session import make_transient_to_detached
make_transient_to_detached(user)
# merge instance in session without emitting sql
user = DBSession.merge(user, load=False)
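Putting it together with the question's user_dict, the whole flow would look roughly like this (a sketch; not verified against your models):

user = User(**user_dict)

# Make the instance look like it was loaded from the db (detached),
# then attach it to the session without emitting any SQL.
make_transient_to_detached(user)
user = DBSession.merge(user, load=False)

# Relationship access now lazy-loads from the database as usual:
print(user.languages)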
This answer was extracted from the question
Imagine that I have one table in my project with some rows in it.
For example:
# -*- coding: utf-8 -*-
import sqlalchemy as sa
from app import db

class Article(db.Model):
    __tablename__ = 'article'

    id = sa.Column(sa.Integer, primary_key=True, autoincrement=True)
    name = sa.Column(sa.Unicode(255))
    content = sa.Column(sa.UnicodeText)
I'm using Flask-SQLAlchemy, so db.session is scoped session object.
I saw the example in https://github.com/zzzeek/sqlalchemy/blob/master/examples/versioned_history/history_meta.py, but I can't understand how to use it with my existing tables, or even how to get it started. (I get an ArgumentError: Session event listen on a scoped_session requires that its creation callable is associated with the Session class. error when I pass db.session to the versioned_session function.)
From versioning I need the following:
1) query for old versions of object
2) query old versions by date range when they changed
3) revert old state to existing object
4) add additional info to history table when version is creating (for example editor user_id, date_edit, remote_ip)
Please tell me what the best practices are for my case, and if you can, add a little working example.
You can work around that error by attaching the event handler to the SignallingSession class[1] instead of the created session object:
from flask_sqlalchemy import SignallingSession
from history_meta import versioned_session, Versioned

# Create your Flask app...

versioned_session(SignallingSession)
db = SQLAlchemy(app)

class Article(Versioned, db.Model):
    __tablename__ = 'article'

    id = sa.Column(sa.Integer, primary_key=True, autoincrement=True)
    name = sa.Column(sa.Unicode(255))
    content = sa.Column(sa.UnicodeText)
The sample code creates parallel tables with a _history suffix and an additional changed datetime column. Querying for old versions is just a matter of looking in that table.
For managing the extra fields, I would put them on your main table, and they'll automatically be kept track of in the history table.
[1] Note, if you override SQLAlchemy.create_session() to use a different session class, you should adjust the class you pass to versioned_session.
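As an illustration of querying old versions, history_meta attaches the generated history class to each versioned class, so something like the following should work (a sketch; some_article_id is illustrative, and changed is the timestamp column the example adds):

# history_meta exposes the generated history class via __history_mapper__
ArticleHistory = Article.__history_mapper__.class_

old_versions = (
    db.session.query(ArticleHistory)
    .filter(ArticleHistory.id == some_article_id)
    .order_by(ArticleHistory.changed)
    .all()
)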
I think the problem is you're running into this bug: https://github.com/mitsuhiko/flask-sqlalchemy/issues/182
One workaround would be to stop using flask-sqlalchemy and configure sqlalchemy yourself.
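A sketch of that workaround, configuring a plain SQLAlchemy scoped session yourself (the URL and names are illustrative):

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

from history_meta import versioned_session

engine = create_engine('sqlite:///app.db')
Session = sessionmaker(bind=engine)

# Attach the versioning hook to the session factory, then wrap it
# in a scoped_session for Flask-style per-request usage.
versioned_session(Session)
db_session = scoped_session(Session)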
I am working on a webapp in Flask and using a services layer to abstract database querying and manipulation away from the views and API routes. It's been suggested that this makes testing easier because you can mock out the services layer, but I am having trouble figuring out a good way to do this. As a simple example, imagine that I have three SQLAlchemy models:
models.py
class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    email = db.Column(db.String)

class Group(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String)

class Transaction(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    from_id = db.Column(db.Integer, db.ForeignKey('user.id'))
    to_id = db.Column(db.Integer, db.ForeignKey('user.id'))
    group_id = db.Column(db.Integer, db.ForeignKey('group.id'))
    amount = db.Column(db.Numeric(precision=2))
There are users and groups, and transactions (which represent money changing hands) between users. Now I have a services.py with a bunch of functions for things like checking whether certain users or groups exist, checking whether a user is a member of a particular group, etc. I use these services in an API route that receives JSON in a request and uses it to add transactions to the db, something like this:
routes.py
import services
from flask import request

@app.route("/addtrans")
def addtrans():
    # get the values out of the json in the request
    args = request.get_json()
    group_id = args['group_id']
    from_id = args['from']
    to_id = args['to']
    amount = args['amount']

    # check that both users exist
    if not services.user_exists(to_id) or not services.user_exists(from_id):
        return "no such users"

    # check that the group exists
    if not services.group_exists(group_id):
        return "no such group"

    # add the transaction to the db
    services.add_transaction(from_id, to_id, group_id, amount)
    return "success"
The problem comes when I try to mock out these services for testing. I've been using the mock library, and I'm having to patch the functions from the services module in order to get them to be redirected to mocks, something like this:
mock = Mock()
mock.user_exists.return_value = True
mock.group_exists.return_value = True

@patch("services.user_exists", mock.user_exists)
@patch("services.group_exists", mock.group_exists)
def test_addtrans_route(self):
    assert "success" in routes.addtrans()
This feels bad for any number of reasons. One, patching feels dirty; two, I don't like having to patch every service method I'm using individually (as far as I can tell there's no way to patch out a whole module).
I've thought of a few ways around this.
1. Reassign routes.services so that it refers to my mock rather than the actual services module, something like: routes.services = mymock
2. Have the services be methods of a class which is passed as a keyword argument to each route, and simply pass in my mock in the test.
3. Same as (2), but with a singleton object.
I'm having trouble evaluating these options and thinking of others. How do people who do python web development usually mock services when testing routes that make use of them?
You can use dependency injection or inversion of control to achieve code that is much simpler to test.
replace this:

def addtrans():
    ...
    # check that both users exist
    if not services.user_exists(to_id) or not services.user_exists(from_id):
        return "no such users"
    ...

with:

def addtrans(services=services):
    ...
    # check that both users exist
    if not services.user_exists(to_id) or not services.user_exists(from_id):
        return "no such users"
    ...
what's happening:
you are aliasing a global as a local (that's not the important point)
you are decoupling your code from services while expecting the same interface.
mocking the things you need is much easier
e.g.:
class MockServices:
    def user_exists(self, id):
        return True
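A test can then hand the route its stub directly. A sketch (assuming a Flask test request context to satisfy request.get_json(), and a stub fleshed out with every method the route touches):

class FullMockServices:
    """Hypothetical stub covering everything addtrans() calls."""
    def user_exists(self, id):
        return True
    def group_exists(self, id):
        return True
    def add_transaction(self, from_id, to_id, group_id, amount):
        pass

def test_addtrans_with_mock_services():
    payload = {'group_id': 1, 'from': 2, 'to': 3, 'amount': '9.99'}
    with app.test_request_context(json=payload):
        assert addtrans(services=FullMockServices()) == "success"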
Some resources:
https://github.com/ivan-korobkov/python-inject
http://code.activestate.com/recipes/413268/
http://www.ninthtest.net/aglyph-python-dependency-injection/
You can patch out the entire services module at the class level of your tests. The mock will then be passed into every method for you to modify.
@patch('routes.services')
class MyTestCase(unittest.TestCase):
    def test_my_code_when_services_returns_true(self, mock_services):
        mock_services.user_exists.return_value = True
        self.assertIn('success', routes.addtrans())

    def test_my_code_when_services_returns_false(self, mock_services):
        mock_services.user_exists.return_value = False
        self.assertNotIn('success', routes.addtrans())
Any access of an attribute on a mock gives you a mock object. You can do things like assert that a function was called with the mock_services.return_value.some_method.return_value. It can get kind of ugly so use with caution.
I would also vote for using dependency injection for such needs. You can use Dependency Injector to describe the structure of your application using inversion of control container(s), making it look like this:
"""Example of dependency injection in Python."""

import logging
import sqlite3

import boto3

import example.main
import example.services

import dependency_injector.containers as containers
import dependency_injector.providers as providers


class Core(containers.DeclarativeContainer):
    """IoC container of core component providers."""

    config = providers.Configuration('config')
    logger = providers.Singleton(logging.Logger, name='example')


class Gateways(containers.DeclarativeContainer):
    """IoC container of gateway (API clients to remote services) providers."""

    database = providers.Singleton(sqlite3.connect, Core.config.database.dsn)
    s3 = providers.Singleton(
        boto3.client, 's3',
        aws_access_key_id=Core.config.aws.access_key_id,
        aws_secret_access_key=Core.config.aws.secret_access_key)


class Services(containers.DeclarativeContainer):
    """IoC container of business service providers."""

    users = providers.Factory(example.services.UsersService,
                              db=Gateways.database,
                              logger=Core.logger)
    auth = providers.Factory(example.services.AuthService,
                             db=Gateways.database,
                             logger=Core.logger,
                             token_ttl=Core.config.auth.token_ttl)
    photos = providers.Factory(example.services.PhotosService,
                               db=Gateways.database,
                               s3=Gateways.s3,
                               logger=Core.logger)


class Application(containers.DeclarativeContainer):
    """IoC container of application component providers."""

    main = providers.Callable(example.main.main,
                              users_service=Services.users,
                              auth_service=Services.auth,
                              photos_service=Services.photos)
Having this will give you a chance to override particular implementations later:
Services.users.override(providers.Factory(example.services.UsersStub))
Hope it helps.
@patch("dao.qualcomm_transaction_service.QualcommTransactionService.get_max_qualcomm_id", 20)
def test_lambda_handler():
    lambda_handler(event, None)
Following your example, I used mocking so that get_max_qualcomm_id returns 20 whenever it is called while testing the lambda function locally, but on reaching the above method I get an 'int' object is not callable exception. Please let me know what the problem is here.
This is the actual method being called, which I am trying to mock:
last_max_id = QualcommTransactionService().get_max_qualcomm_id(self.subscriber_id)
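The likely cause (a hedged note, since the surrounding code isn't shown): patch(target, 20) replaces the method itself with the integer 20, so QualcommTransactionService().get_max_qualcomm_id(...) ends up trying to call an int. A sketch of a fix is to patch with a mock that returns 20 when called:

from unittest.mock import patch

# Patch with a Mock whose return_value is 20, instead of with 20 itself;
# extra keyword arguments to patch() configure the replacement MagicMock.
@patch("dao.qualcomm_transaction_service.QualcommTransactionService.get_max_qualcomm_id",
       return_value=20)
def test_lambda_handler(mock_get_max_id):
    lambda_handler(event, None)
    mock_get_max_id.assert_called_once()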