I am using flask-restful. This is the resource class I want to insert the data with:
class OrderHistoryResource(Resource):
    model = OrderHistoryModel
    schema = OrderHistorySchema
    order = OrderModel
    product = ProductModel

    def post(self):
        value = req.get_json()
        data = cls.schema(many=True).load(value)
        data.insert()
In my model
def insert(self):
    db.session.add(self)
    db.session.commit()
My schema:
from config.ma import ma
from model.orderhistory import OrderHistoryModel


class OrderHistorySchema(ma.ModelSchema):
    class Meta:
        model = OrderHistoryModel
        include_fk = True
Example data I want to insert:
[
    {
        "quantity": 99,
        "flaskSaleStatus": true,
        "orderId": "ORDER_64a79028d1704406b6bb83b84ad8c02a_1568776516",
        "proId": "PROD_9_1568779885_64a79028d1704406b6bb83b84ad8c02a"
    },
    {
        "quantity": 89,
        "flaskSaleStatus": true,
        "orderId": "ORDER_64a79028d1704406b6bb83b84ad8c02a_1568776516",
        "proId": "PROD_9_1568779885_64a79028d1704406b6bb83b84ad8c02a"
    }
]
This is what I got once the insert method ran:

TypeError: insert() takes exactly 2 arguments (0 given)

Or is there another way to do this?
Edited - realised marshmallow-sqlalchemy loads directly to model instances.
You don't need to loop through the instances in your list. Since load already gives you OrderHistoryModel objects, you can use add_all to add them to the session and commit them in bulk - see the docs.
Should be something like:
db.session.add_all(data)
db.session.commit()
See this post for a brief discussion of why add_all is best when you have complex ORM relationships.
Also - I'm not sure you need to have all your models/schemas as class variables; it's fine to have them imported (or just present in the same file, as long as they're declared before the resource class).
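For illustration, a minimal sketch of what that could look like with the schema simply imported and the loaded instances added in bulk (assuming request comes from flask and that the import paths below roughly match your project layout):

from flask import request
from flask_restful import Resource

from config.db import db  # assumption: wherever your SQLAlchemy instance lives
from schema.orderhistory import OrderHistorySchema  # assumption: your schema module path


class OrderHistoryResource(Resource):
    def post(self):
        value = request.get_json()
        # ModelSchema(many=True).load returns a list of OrderHistoryModel instances
        data = OrderHistorySchema(many=True).load(value)
        db.session.add_all(data)
        db.session.commit()
        return {'inserted': len(data)}, 201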
You are calling insert on a list, because data is a list of OrderHistoryModel instances.
Also, the post method doesn't need to be a classmethod, and you probably had an error there as well, since cls isn't defined inside it.
Since data is a list of model instances, you can use the db.session.add_all method to add them to the session in bulk.
def post(self):
    value = req.get_json()
    data = self.schema(many=True).load(value)
    db.session.add_all(data)
    db.session.commit()
I have a field on a schema that should use a specific schema based on a value in that field, if it exists. To elaborate, the field should be fields.Dict() if the value doesn't contain a version property; otherwise, the schema should be retrieved from a map of versions to schemas.
def pricing_schema_serialization(base_object, parent_obj):
    # Use dictionary access so this function can look up the schema for both
    # serialization directions, as well as for test data
    if type(base_object) is not dict:
        object_dict = base_object.__dict__
    else:
        object_dict = base_object

    pricing_version = object_dict.get("version", None)
    if pricing_version in version_to_schema:
        return version_to_schema[pricing_version]()  # Works fine
    return fields.Dict(missing=dict, default=dict)  # Produces the error below
class Product:
    # pricing = fields.Dict(missing=dict, default=dict)  # This worked
    pricing = PolyField(
        serialization_schema_selector=pricing_schema_serialization,
        deserialization_schema_selector=pricing_schema_serialization,
        required=False,
        missing=dict,
        default=dict,
    )
This is the error produced:
{'pricing': ["Unable to use schema. Ensure there is a deserialization_schema_selector and that it returns a schema when the function is passed in {}. This is the class I got. Make sure it is a schema: <class 'marshmallow.fields.Dict'>"]}
I saw that marshmallow 3.x has a from_dict() method. Unfortunately, we're stuck on 2.x.
Things I tried:
The code above
Creating my own from_dict class method following the same implementation here. Same result
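One direction that might be worth trying (I haven't verified this against marshmallow 2.x / marshmallow_polyfield, so treat it as a sketch): since the error complains that fields.Dict is not a schema, have the selector always return a Schema instance, and wrap the versionless case in a trivial pass-through schema. RawDictSchema below is a made-up name:

from marshmallow import Schema, post_dump, post_load


class RawDictSchema(Schema):
    """Pass-through schema so PolyField always receives a Schema, never a Field."""

    @post_load(pass_original=True)
    def _return_original_on_load(self, data, original_data):
        # Hand back the raw input dict untouched
        return original_data

    @post_dump(pass_original=True)
    def _return_original_on_dump(self, data, original_data):
        return original_data


def pricing_schema_serialization(base_object, parent_obj):
    object_dict = base_object if isinstance(base_object, dict) else base_object.__dict__
    pricing_version = object_dict.get("version", None)
    if pricing_version in version_to_schema:
        return version_to_schema[pricing_version]()
    return RawDictSchema()  # a Schema instance, which is what PolyField expects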
Question
How can I build a Model that stores one field in the database, and then retrieves other fields from an API behind-the-scenes when necessary?
Details:
I'm trying to build a Model called Interviewer that stores an ID in the database, and then retrieves name from an external API. I want to avoid storing a copy of name in my app's database. I also want the fields to be retrieved in bulk rather than per model instance because these will be displayed in a paginated list.
My first attempt was to create a custom Model Manager called InterviewerManager that overrides get_queryset() in order to set name on the results, like so:
class InterviewerManager(models.Manager):
    def get_queryset(self):
        query_set = super().get_queryset()
        for result in query_set:
            result.name = 'Mary'
        return query_set


class Interviewer(models.Model):
    # ID provided by API, stored in database
    id = models.IntegerField(primary_key=True, null=False)

    # Fields provided by API, not in database
    name = 'UNSET'

    # Custom model manager
    interviewers = InterviewerManager()
However, it seems like the hardcoded value of Mary is only present if the QuerySet is not chained with subsequent calls, and I'm not sure why. For example, in the Django shell:
>>> list(Interviewer.interviewers.all())[0].name
'Mary' # Good :)
>>> Interviewer.interviewers.all().filter(id=1).first().name
'UNSET' # Bad :(
My current workaround is to build a cache layer inside of InterviewerManager that the model accesses, like so:
class InterviewerManager(models.Manager):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.api_cache = {}

    def get_queryset(self):
        query_set = super().get_queryset()
        for result in query_set:
            # Mock querying a remote API
            self.api_cache[result.id] = {
                'name': 'Mary',
            }
        return query_set


class Interviewer(models.Model):
    # ID provided by API, stored in database
    id = models.IntegerField(primary_key=True, null=False)

    # Custom model manager
    interviewers = InterviewerManager()

    # Fields provided by API, not in database
    @property
    def name(self):
        return Interviewer.interviewers.api_cache[self.id]['name']
However, this doesn't feel like idiomatic Django. Is there a better solution for this situation?
Thanks
Why not just make the API call in the name property?
@property
def name(self):
    name = get_name_from_api(self.id)
    return name
If that isn't possible, see whether you can adapt the GET request so that you can send a list of IDs and receive the data for all of them at once; the easy way is to do it in a loop. I would recommend building a so-called proxy where you load the records into a dataframe/dict, save that data (with pickle, for example) and use it when necessary. It reduces load times and is reasonably efficient.
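A rough sketch of the bulk direction, assuming a hypothetical get_names_from_api(ids) helper that takes a list of IDs and returns an {id: name} mapping in a single call:

class InterviewerManager(models.Manager):
    def get_queryset(self):
        query_set = super().get_queryset()
        # One bulk API call for the whole result set
        # (get_names_from_api is a hypothetical helper, not a Django API)
        names = get_names_from_api([interviewer.id for interviewer in query_set])
        for interviewer in query_set:
            interviewer.name = names.get(interviewer.id, 'UNSET')
        return query_set

Note this has the same chaining caveat described in the question: .filter() and similar calls build a new QuerySet, so attributes set here won't survive further chaining.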
I want to create a viewset/APIView with a path like this: list/<slug:entry>/, so that once I provide the entry it checks whether that entry exists in the database.
*Note: on list/ I have a path to a ViewSet. I wondered if I could replace the id with the specific field I want to check, so I could see whether the entry exists or not, but I want to keep the id as it is.
I tried:
class CheckCouponAPIView(APIView):
    def get(self, request, format=None):
        try:
            Coupon.objects.get(coupon=self.kwargs.get('coupon'))
        except Coupon.DoesNotExist:
            return Response(data={'message': False})
        else:
            return Response(data={'message': True})
But I got an error: get() got an unexpected keyword argument 'coupon'.
Here's the path: path('check/<slug:coupon>/', CheckCouponAPIView.as_view()),
Is there any good practice that I could apply in my situation?
What about trying something like this,
class CheckCouponAPIView(viewsets.ModelViewSet):
    # other fields
    lookup_field = 'slug'
From the official DRF docs:

lookup_field - The model field that should be used for performing object lookup of individual model instances. Defaults to 'pk'.
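For illustration, a sketch of how that could be wired up, assuming a CouponSerializer already exists and that the slug-like model field is actually named coupon (substitute your real field name in lookup_field):

from rest_framework import viewsets
from rest_framework.routers import DefaultRouter


class CouponViewSet(viewsets.ModelViewSet):
    queryset = Coupon.objects.all()
    serializer_class = CouponSerializer  # assumption: an existing serializer for Coupon
    lookup_field = 'coupon'              # look up by the coupon field instead of pk


router = DefaultRouter()
router.register('check', CouponViewSet)
# GET /check/<coupon>/ now retrieves by the coupon field; a 404 means it does not exist

Unlike the original APIView, this returns the serialized object or a 404 rather than a True/False message, so it illustrates the lookup mechanism rather than being a drop-in replacement.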
For the life of me I cannot figure this out and I'm having trouble finding information on it.
I have a Django view which accepts an argument that is a primary key (e.g. URL/problem/12) and loads a page with information from the argument's model.
I want to mock the models used by my view for testing, but I cannot figure it out. This is what I've tried:
@patch('apps.problem.models.Problem')
def test_search_response(self, problem, chgbk, dispute):
    problem(problem_id=854, vendor_num=100, chgbk=122)
    request = self.factory.get(reverse('dispute_landing:search'))
    request.user = self.user
    request.usertype = self.usertype
    response = search(request, problem_num=12)
    self.assertTemplateUsed('individual_chargeback_view.html')
However - I can never get the test to actually find the problem number, it's as if the model does not exist.
I think that's because if you mock the entire model itself, the model won't exist because any of the functions to create/save it will have been mocked. If Problem is just a mock model class that hasn't been modified in any way, it knows nothing about interacting with the database, the ORM, or anything that could be discoverable from within your search() method.
One approach you could take rather than mocking models themselves would be to create FactoryBoy model factories. Since the test database is destroyed with each test run, these factories are a great way to create test data:
http://factoryboy.readthedocs.io/en/latest/
You could spin up a ProblemFactory like so:
class ProblemFactory(factory.django.DjangoModelFactory):
    # DjangoModelFactory saves the created instance to the test database
    class Meta:
        model = Problem

    problem_id = factory.Faker("pyint")
    vendor_num = factory.Faker("pyint")
    chgbk = factory.Faker("pyint")
Then use it to create a model that actually exists in your database:
def test_search_response(self):
    problem = ProblemFactory(problem_id=854, vendor_num=100, chgbk=122)
    request = self.factory.get(reverse('dispute_landing:search', kwargs={'problem_id': problem.id}))
    request.user = self.user
    request.usertype = self.usertype
    response = search(request, problem_num=854)
    self.assertTemplateUsed('individual_chargeback_view.html')
I am facing a very weird issue today.
Here are my serializer classes.
from rest_framework import serializers
from rest_framework.validators import UniqueValidator


class Connectivity(serializers.Serializer):
    device_type = serializers.CharField(max_length=100, required=True)
    device_name = serializers.CharField(max_length=100, required=True)


class Connections(serializers.Serializer):
    device_name = serializers.CharField(max_length=100, required=True)
    connectivity = Connectivity(required=True, many=True)


class Topologyserializer(serializers.Serializer):
    name = serializers.CharField(max_length=100, required=True,
                                 validators=[UniqueValidator(queryset=Topology.objects.all())])
    json = Connections(required=True, many=True)

    def create(self, validated_data):
        return validated_data
I am calling Topologyserializer from a Django view and I am passing JSON like this:
{
    "name": "tokpwol",
    "json": []
}
As per my experience with DRF, since I have set required=True on the json field, it should not accept the above JSON.
But I am able to create the record.
Can anyone explain why it is not validating the json field and how it is accepting an empty list for it?
I am using Django REST framework 3.0.3.
DRF does not clearly state what required means for list fields.
In its code, it appears that validation passes as long as a value is supplied, even if that value is an empty list.
If you want to ensure the list is not empty, you'll need to validate its content manually. You would do that by adding the following method to your Topologyserializer:
def validate_json(self, value):
    if not value:
        raise serializers.ValidationError("Connections list is empty")
    return value
I cannot test it right now, but it should work.
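As a side note, newer DRF versions (not the 3.0.3 mentioned above, as far as I'm aware) let the ListSerializer created by many=True take allow_empty=False, which rejects an empty list without a custom validator:

# only on newer DRF releases
json = Connections(required=True, many=True, allow_empty=False)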