I'm running some Celery tasks and I'm trying to get a clear view of when a task has completed, whether it succeeded or failed.
I have created a Backlog model:
class Backlog(models.Model):
    """Class designed to create backlogs."""
    name = models.CharField(max_length=255, blank=True, null=True)
    description = models.TextField(blank=True, null=True)
    # Status
    time = models.DateTimeField(blank=True, null=True)
    has_succeeded = models.BooleanField(default=False)
I have created a Test task in tasks.py to see if I could get a result:
@shared_task(name='Test task', bind=True)
def test_task(self):
    task_id = self.request.id
    result = test_task.AsyncResult(task_id)
    print('Task started with id:', task_id)
    add(2, 2)
    print(result.state)
    if result.state == 'SUCCESS':
        print('Task completed successfully')
        response = {
            'state': result.state,
            'status': str(result.info),
            'date': result.date_done,
        }
        Backlog.objects.create(name=f'Success in task {task_id}', description=str(result.info),
                               time=result.date_done, has_succeeded=True)
    elif result.state == 'FAILURE':
        print('Task is still running or failed with status:', result.state)
        response = {
            'state': result.state,
            'status': str(result.info),
            'date': result.date_done,
        }
        Backlog.objects.create(name=f'Error in task {task_id}', description=str(result.info),
                               time=result.date_done, has_succeeded=False)
And I just test with a simple function:
def add(x, y):
    return x + y
I get the following output in the worker log:
[2023-02-01 18:47:02,580: WARNING/ForkPoolWorker-6] Task is still running or failed with status:
[2023-02-01 18:47:02,582: WARNING/ForkPoolWorker-6]
[2023-02-01 18:47:02,582: WARNING/ForkPoolWorker-6] STARTED
[2023-02-01 18:47:02,585: INFO/ForkPoolWorker-6] Task Test task[06f71596-3082-4012-8ea8-05a22d57ca89] succeeded in 0.0534969580003235s: None
I can't manage to catch the exact moment the state becomes SUCCESS, since I only ever see the STARTED state.
What am I missing?
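For context, a task's state only transitions to SUCCESS after the task function returns, so it can never be observed from inside the task body itself; checking AsyncResult there will always show STARTED (or PENDING). A minimal sketch of an alternative, using Celery's task_success and task_failure signals, which fire once the outcome is actually known (the handler names are illustrative):

from celery.signals import task_success, task_failure
from django.utils import timezone

@task_success.connect
def record_success(sender=None, result=None, **kwargs):
    # Fires after the task has returned and its result has been stored.
    Backlog.objects.create(
        name=f'Success in task {sender.request.id}',
        description=str(result),
        time=timezone.now(),
        has_succeeded=True,
    )

@task_failure.connect
def record_failure(sender=None, task_id=None, exception=None, **kwargs):
    # Fires if the task raised an exception.
    Backlog.objects.create(
        name=f'Error in task {task_id}',
        description=str(exception),
        time=timezone.now(),
        has_succeeded=False,
    )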
While running my application's test cases, I keep seeing a test fail even though the data is well formatted. The following are my code snippets:
I was able to create a new user through the application interface, but when testing its behaviour I got an unexpected error status code, 422. I don't really know what is going wrong in the snippets, so I have included all the relevant code below for a better look into the issue.
For the endpoint /users
@app.route('/users', methods=['POST'])
def create_user():
    try:
        body = request.get_json()
        new_user = body.get('user_name', None)
        if new_user is None:
            abort(405)
        new_entry = User(new_user)
        new_entry.insert()
        return jsonify({
            'success': True
        })
    except:
        abort(422)
Here is my model_class:
class User(db.Model):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    score = Column(Integer, nullable=False)

    def __init__(self, name, score=0):
        self.name = name
        self.score = score

    def insert(self):
        db.session.add(self)
        db.session.commit()

    def format(self):
        return {
            'id': self.id,
            'name': self.name,
            'score': self.score
        }
And here is my test_case_file:
class TriviaTestCase(unittest.TestCase):
    def setUp(self):
        self.app = create_app()
        self.client = self.app.test_client
        self.database_path = "postgresql://postgres:postgresspass@localhost:5432/user_test_db"
        setup_db(self.app, self.database_path)
        self.new_user = {
            "username": "P.Son",
            "score": 0
        }

    def test_create_user(self):
        res = self.client().post("/users", json=self.new_user)
        data = json.loads(res.data)
        self.assertEqual(res.status_code, 200)
        self.assertEqual(data['success'], True)

if __name__ == "__main__":
    unittest.main()
Output of my test:
============================
FAIL: test_create_user (__main__.TriviaTestCase)
----------------------------
Traceback (most recent call last):
File "C:\...\test_flaskr.py", line 201, in test_create_user
self.assertEqual(res.status_code, 200)
AssertionError: 422 != 200
Note: other endpoints pass their tests, but the endpoint above keeps failing. I don't know what is wrong with the snippets I have written.
First of all, you should always log such errors when they happen in your endpoints.
Second, you are returning the 422 error yourself in case of "any" exception, and there are some in your code. One of them is that you are just passing user_name to the constructor of the User class, but a score variable should be present too.
Another problem is that you should pass keyword arguments to the User class constructor, not a dictionary.
It should be something like this:
body = request.get_json()
user_name = body.get('user_name', '')
score = body.get('score', '')
if not all([user_name, score]):
    abort(400)  # return some customized error here
new_entry = User(name=user_name, score=score)
Also, you should consider some form of input validation rather than just checking the data yourself. Something like Marshmallow would be fine.
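For illustration, a minimal sketch of that approach (assuming a recent marshmallow 3.x is installed; the schema and field names are illustrative):

from marshmallow import Schema, fields, ValidationError

class UserSchema(Schema):
    user_name = fields.String(required=True)
    score = fields.Integer(load_default=0)

# Inside the endpoint, in place of the manual checks:
try:
    data = UserSchema().load(request.get_json())
except ValidationError as err:
    abort(400, description=str(err.messages))
new_entry = User(name=data['user_name'], score=data['score'])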
I'm trying to set up a consumer test with Pact, but I'm struggling. If someone could point out where I'm going wrong it would be appreciated.
The file I am trying to test is as follows:
import requests
from orders_service.exceptions import (
APIIntegrationError,
InvalidActionError
)
class OrderItem:
    def __init__(self, id, product, quantity, size):
        self.id = id
        self.product = product
        self.quantity = quantity
        self.size = size

    def dict(self):
        return {
            'product': self.product,
            'size': self.size,
            'quantity': self.quantity
        }


class Order:
    def __init__(self, id, created, items, status, schedule_id=None,
                 delivery_id=None, order_=None):
        self._order = order_
        self._id = id
        self._created = created
        self.items = [OrderItem(**item) for item in items]
        self.status = status
        self.schedule_id = schedule_id
        self.delivery_id = delivery_id

    @property
    def id(self):
        return self._id or self._order.id

    @property
    def created(self):
        return self._created or self._order.created

    @property
    def status(self):
        return self._status or self._order.status

    def cancel(self):
        if self.status == 'progress':
            response = requests.get(
                f'http://localhost:3001/kitchen/schedule/{self.schedule_id}/cancel',
                data={'order': self.items}
            )
            if response.status_code == 200:
                return
            raise APIIntegrationError(
                f'Could not cancel order with id {self.id}'
            )
        if self.status == 'delivery':
            raise InvalidActionError(f'Cannot cancel order with id {self.id}')

    def pay(self):
        response = requests.post(
            'http://localhost:3001/payments', data={'order_id': self.id}
        )
        if response.status_code == 200:
            return
        raise APIIntegrationError(
            f'Could not process payment for order with id {self.id}'
        )

    def schedule(self):
        response = requests.post(
            'http://localhost:3000/kitchen/schedule',
            data={'order': [item.dict() for item in self.items]}
        )
        if response.status_code == 201:
            return response.json()['id']
        raise APIIntegrationError(
            f'Could not schedule order with id {self.id}'
        )

    def dict(self):
        return {
            'id': self.id,
            'order': [item.dict() for item in self.items],
            'status': self.status,
            'created': self.created,
        }
As for the consumer test, I just can't get it to the stage where it publishes the contract. There are two areas I'm not too familiar with: firstly the pytest fixture (I'm really unsure what needs to go there or how to do it), and lastly the consumer.cancel() at the very bottom of the test.
Some help getting me set up and on the way would be greatly appreciated. Here is what I wrote for the test:
import atexit
from datetime import datetime
import logging
import os
from uuid import UUID
import requests
import pytest
import subprocess
from pact import Consumer, Like, Provider, Term, Format
from orders_service.orders import Order, OrderItem
log = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)
# If publishing the Pact(s), they will be submitted to the Pact Broker here.
# For the purposes of this example, the broker is started up as a fixture defined
# in conftest.py. For normal usage this would be self-hosted or using Pactflow.
PACT_BROKER_URL = "https://xxx.pactflow.io/"
PACT_BROKER_USERNAME = xxx
PACT_BROKER_PASSWORD = xxx
# Define where to run the mock server, for the consumer to connect to. These
# are the defaults so may be omitted
PACT_MOCK_HOST = "localhost"
PACT_MOCK_PORT = 1234
# Where to output the JSON Pact files created by any tests
PACT_DIR = os.path.dirname(os.path.realpath(__file__))
@pytest.fixture
def consumer() -> Order.cancel:
    # return Order.cancel("http://{host}:{port}".format(host=PACT_MOCK_HOST, "port=PACT_MOCK_PORT))
    order = [OrderItem(**{"id": 1, "product": "coffee", "size": "big", "quantity": 2})]
    payload = Order(id=UUID, created=datetime.now, items=order, status="progress")
    return Order.cancel(payload)
@pytest.fixture(scope="session")
def pact(request):
    """Setup a Pact Consumer, which provides the Provider mock service. This
    will generate and optionally publish Pacts to the Pact Broker"""

    # When publishing a Pact to the Pact Broker, a version number of the Consumer
    # is required, to be able to construct the compatibility matrix between the
    # Consumer versions and Provider versions
    # version = request.config.getoption("--publish-pact")
    # publish = True if version else False

    pact = Consumer("UserServiceClient", version=1).has_pact_with(
        Provider("UserService"),
        host_name=PACT_MOCK_HOST,
        port=PACT_MOCK_PORT,
        pact_dir=PACT_DIR,
        publish_to_broker=True,
        broker_base_url=PACT_BROKER_URL,
        broker_username=PACT_BROKER_USERNAME,
        broker_password=PACT_BROKER_PASSWORD,
    )

    pact.start_service()

    # Make sure the Pact mocked provider is stopped when we finish, otherwise
    # port 1234 may become blocked
    atexit.register(pact.stop_service)

    yield pact

    # This will stop the Pact mock server, and if publish is True, submit Pacts
    # to the Pact Broker
    pact.stop_service()

    # Given we have cleanly stopped the service, we do not want to re-submit the
    # Pacts to the Pact Broker again atexit, since the Broker may no longer be
    # available if it has been started using the --run-broker option, as it will
    # have been torn down at that point
    pact.publish_to_broker = False
def test_cancel_scheduled_order(pact, consumer):
    expected = {
        "id": "1e54e244-d0ab-46ed-a88a-b9e6037655ef",
        "order": [
            {
                "product": "coffee",
                "quantity": 1,
                "size": "small"
            }
        ],
        "scheduled": "Wed, 22 Jun 2022 09:21:26 GMT",
        "status": "cancelled"
    }

    (pact
     .given('A scheduled order exists and it is not cancelled already')
     .upon_receiving('a request for cancellation')
     .with_request('get', f'http://localhost:3001/kitchen/schedule/{Like(12343)}/cancel')
     .will_respond_with(200, body=Like(expected)))

    with pact:
        payload = Order(UUID, datetime.now, {"product": "coffee", "size": "large", "quantity": 1}, "progress")
        print(payload)
        response = consumer.cancel(payload)
        assert response['status'] == "cancelled"
        pact.verify()
Also, I originally had this (adapted from the example in Pact):
# return Order.cancel("http://{host}:{port}".format(host=PACT_MOCK_HOST, "port=PACT_MOCK_PORT))
but I'm not sure how that works.
Thanks for helping me.
There are a couple of issues here:
.with_request('get', f'http://localhost:3001/kitchen/schedule/{Like(12343)}/cancel')
The Like matcher is a function that returns an object. Embedding it in an f-string is likely to cause issues when it is stringified.
You don't need to put the protocol and host portion here, just the path, e.g.:
.with_request(method='GET', path='/kitchen/schedule/bc72e917-4af1-4e39-b897-1eda6d006b18/cancel', headers={'Content-Type': 'application/json'} ...)
If you want to use a matcher on the path, it needs to be on the string as a whole e.g. Regex('/kitchen/schedule/([0-9]+)/cancel') (this is not a real regex, but hopefully you get the idea).
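For example, a sketch using pact-python's Term matcher (already imported in the test above), which pairs a regex for matching with a concrete example value and is applied to the path as a whole; the pattern here is illustrative:

.with_request(
    method='GET',
    path=Term(
        r'/kitchen/schedule/[0-9a-f\-]+/cancel',
        '/kitchen/schedule/bc72e917-4af1-4e39-b897-1eda6d006b18/cancel'
    ),
)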
I can’t see in this code where it calls the actual mock service. I’ve removed the commented items for readability:
(pact
 .given('A scheduled order exists and it is not cancelled already')
 .upon_receiving('a request for cancellation')
 .with_request(method='GET', path='/kitchen/schedule/bc72e917-4af1-4e39-b897-1eda6d006b18/cancel', headers={'Content-Type': 'application/json'},)
 .will_respond_with(200, body=Like(expected)))

with pact:
    # this needs to be sending a request to
    # http://localhost:1234/kitchen/schedule/bc72e917-4af1-4e39-b897-1eda6d006b18/cancel
    response = consumer.cancel()
    pact.verify()
The definition of the function you are calling doesn't make any HTTP request to the Pact mock service; it just returns a canned response.

@pytest.fixture
def consumer() -> Order.cancel:
    # return Order.cancel("http://{host}:{port}".format(host=PACT_MOCK_HOST, "port=PACT_MOCK_PORT))
    order = [OrderItem(**{"id": 1, "product": "coffee", "size": "big", "quantity": 2})]
    payload = Order(id=UUID, created=datetime.now, items=order, status="progress")
    return Order.cancel(payload)
For a Pact test to pass, you need to demonstrate your code actually calls the correct HTTP endpoints with the right data, and that your code can handle it.
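To make that concrete, here is a minimal sketch of what a working fixture and test might look like, under two stated assumptions: Order would need to be refactored so its base URL is injectable (the base_url parameter below is hypothetical) instead of hardcoding http://localhost:3001, and schedule_id must be set so the path is known in advance; all names and values here are illustrative:

@pytest.fixture
def order():
    items = [{"id": 1, "product": "coffee", "size": "big", "quantity": 2}]
    # Point the order at the Pact mock server rather than the real service.
    return Order(id="1e54e244-d0ab-46ed-a88a-b9e6037655ef",
                 created=datetime.now(), items=items, status="progress",
                 schedule_id="12343",
                 base_url=f"http://{PACT_MOCK_HOST}:{PACT_MOCK_PORT}")

def test_cancel_scheduled_order(pact, order):
    (pact
     .given('A scheduled order exists and it is not cancelled already')
     .upon_receiving('a request for cancellation')
     .with_request(method='GET', path='/kitchen/schedule/12343/cancel')
     .will_respond_with(200, body=Like({"status": "cancelled"})))

    with pact:
        # cancel() now issues a real GET against the mock server,
        # which is exactly what Pact verifies.
        order.cancel()
        pact.verify()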
So I am extremely new to programming and am stuck on this issue. I am using Python with Django and MongoDB for the database. I need to write a service that assigns an ID (not the one assigned by MongoDB) on each user form submission. For example, entry 1's ID will be [prefix entered by user]2101, entry 2's ID will be [prefix entered by user]2102; basically it counts up from a base number of 2100.
I have no idea how or where to integrate this logic in my code. I have tried a few solutions from the internet, but nothing seems to work.
my code:
Model.py
class Writeups(Document):
    blog_id = 2100
    title = fields.StringField(max_length=120)
    date_created = fields.DateField(blank=True, null=True)
    date_modified = fields.DateField(blank=True, null=True)
    version_number = fields.DecimalField(null=True, max_digits=1000, decimal_places=2)
    storage_path = fields.StringField(max_length=120)

    STRIKE_READY_BRIEF = 'SRB'
    STRIKE_READY_THREAT_REPORT = 'SRTR'
    PREFIX_CHOICES = [
        (STRIKE_READY_BRIEF, 'SRB'),
        (STRIKE_READY_THREAT_REPORT, 'SRTR'),
    ]
    prefix = fields.StringField(
        max_length=4,
        choices=PREFIX_CHOICES,
        null=False,
        blank=False,
    )
views.py:
@csrf_exempt
def writeups_request(request):
    """
    Writeup Request
    """
    if request.method == 'GET':
        try:
            data = {
                'form-TOTAL_FORMS': '1',
                'form-INITIAL_FORMS': '0',
            }
            writeups = WriteupsFormset(data)
            # print(writeup)
            return render(request, "writeups/writeups.html", {'writeups_forms': writeups})
        except Exception as e:
            print(e)
            response = {"error": "Error occurred"}
            return JsonResponse(response, safe=False)

    if request.method == 'POST':
        writeup_data = WriteupsFormset(request.POST)
        if writeup_data.is_valid():
            flag = False
            logs = []
            for writeups_data in writeup_data:
                print(writeups_data)
                if writeups_data.cleaned_data.get('DELETE'):  # and malware_data._should_delete_form(form):
                    continue
                title = writeups_data.cleaned_data.get('title')
                date_created = writeups_data.cleaned_data.get('date_created')
                date_modified = writeups_data.cleaned_data.get('date_modified')
                version_number = writeups_data.cleaned_data.get('version_number')
                storage_path = writeups_data.cleaned_data.get('storage_path')
                prefix = writeups_data.cleaned_data.get('prefix')
                try:
                    writeups = Writeups(
                        title=title,
                        date_created=date_created,
                        date_modified=date_modified,
                        version_number=version_number,
                        storage_path=storage_path,
                        prefix=prefix)
                    writeups.save()
                except Exception as e:
                    print(e)
In order to run your custom logic on every submission, Django's signals are the solution for your use case. Look into the post_save and pre_save signals and use them as appropriate for your problem:
https://docs.djangoproject.com/en/3.2/ref/signals/
If you have an existing database and you want a script to iterate through it and update the dataset, you can take a look at management commands:
https://docs.djangoproject.com/en/4.0/howto/custom-management-commands/
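As a minimal sketch of the signal idea (assumptions: blog_id becomes an actual fields.StringField rather than the bare class attribute above, and since Writeups is a mongoengine Document rather than a Django model, this connects mongoengine's own pre_save signal, which mirrors Django's; the handler name and counter logic are illustrative):

from mongoengine import signals

BASE = 2100

def assign_blog_id(sender, document, **kwargs):
    # Assign "<prefix>2101", "<prefix>2102", ... on first save only.
    if not document.blog_id:
        # Naive counter; a production version should use an atomic
        # sequence to avoid races under concurrent submissions.
        count = Writeups.objects.count()
        document.blog_id = f'{document.prefix}{BASE + count + 1}'

signals.pre_save.connect(assign_blog_id, sender=Writeups)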
I integrated Scrapy in my Django project following this guide.
Unfortunately, no matter what I try, the spider jobs are not starting, even though schedule.json gives me a jobid in return.
My views:
@csrf_exempt
@api_view(['POST'])
def crawl_url(request):
    url = request.POST.get('url', None)  # takes url from request
    if not url:
        return JsonResponse({'error': 'Missing args'})
    if not is_valid_url(url):
        return JsonResponse({'error': 'URL is invalid'})

    domain = urlparse(url).netloc  # parse the url and extract the domain
    unique_id = str(uuid4())  # creates a unique ID.

    # Custom settings for scrapy spider.
    # We can send anything we want to use it inside spiders and pipelines.
    settings = {
        'unique_id': unique_id,  # unique ID for each record for DB
        'USER_AGENT': 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'
    }

    # Schedule a new crawling task from scrapyd.
    # settings is a special argument name.
    # This returns an ID which belongs to this task, used to check the task status
    task = scrapyd.schedule('default', 'kw_spider', settings=settings, url=url, domain=domain)
    return JsonResponse({'task_id': task, 'unique_id': unique_id, 'status': 'started'})
@csrf_exempt
@api_view(['GET'])
def get_crawl_data(request):
    task_id = request.GET.get('task_id', None)
    unique_id = request.GET.get('unique_id', None)
    if not task_id or not unique_id:
        return JsonResponse({'error': 'Missing args'})

    # Check status of crawling
    # If finished, makes query from database and get results
    # If not, return active status
    # Possible results are -> pending, running, finished
    status = scrapyd.job_status('default', task_id)
    if status == '' or status is None:
        return JsonResponse({
            'status': 'error',
            'data': 'Task not found'
        })
    elif status == 'finished':
        try:
            item = ScrapyItem.objects.get(unique_id=unique_id)
            return JsonResponse({
                'status': status,
                'data': item.to_dict['data']
            })
        except Exception as e:
            return JsonResponse({
                'status': 'error',
                'data': str(e)
            })
    else:
        return JsonResponse({
            'status': status,
            'data': {}
        })
My spider:
class KwSpiderSpider(CrawlSpider):
    name = 'kw_spider'

    def __init__(self, *args, **kwargs):
        # __init__ overridden to have a dynamic spider
        # args passed from django views
        self.url = kwargs.get('url')
        self.domain = kwargs.get('domain')
        self.start_urls = [self.url]
        self.allowed_domains = [self.domain]

        KwSpiderSpider.rules = [
            Rule(LinkExtractor(unique=True), callback='parse_item'),
        ]
        super(KwSpiderSpider, self).__init__(*args, **kwargs)

    def parse_item(self, response):
        resp_dict = {
            'url': response.url
        }
        # resp_dict['domain_id'] = response.xpath('//input[@id="sid"]/@value').extract()
        # resp_dict['name'] = response.xpath('//div[@id="name"]').extract()
        # resp_dict['description'] = response.xpath('//div[@id="description"]').extract()
        return resp_dict
I also tried with a curl call
curl http://localhost:6800/schedule.json -d project=default -d spider=kw_spider
which gave me the following response:
{"node_name": "9jvtf82", "status": "ok", "jobid": "0ca057026e5611e8898f64006a668b22"}
But nothing happens; the job doesn't start.
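As a debugging aid (an addition, using scrapyd's standard listjobs.json endpoint): asking the daemon for its job lists shows whether a scheduled job is stuck in the pending queue rather than actually running:

curl "http://localhost:6800/listjobs.json?project=default"
# {"status": "ok", "pending": [...], "running": [...], "finished": [...]}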
I solved it by noticing an error in the scrapyd console log.
I was missing the pywin32 library, though I don't understand why this wasn't in the requirements.
A simple
pip install pywin32
fixed it
I have a time-consuming operation that modifies the database and is triggered by a request to the route, so I want to perform the operation asynchronously, without waiting for it to end, and return the status directly. I tried to use the threading module in the route, but got this error:
The following is the code:
define a model:
class Plan(db.Model):
    __tablename__ = 'opt_version_plans'
    planid = db.Column(db.Integer, primary_key=True)
    planname = db.Column(db.String(255))
    jobname = db.Column(db.String(255))
    branch = db.Column(db.String(32))
define function:
def test():
    time.sleep(20)
    p = Plan.query.get(1)
    print p.planname
route:
@app.route('/')
def index():
    t = threading.Thread(target=test)
    t.start()
    return "helloworld"
How can I achieve what I need this way?
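For what it's worth, database access from a freshly spawned thread typically fails because that thread has no Flask application context. A minimal sketch of one way to handle it (assuming the Flask application object is named app):

def test():
    # Push an application context so Plan.query works outside the request thread.
    with app.app_context():
        time.sleep(20)
        p = Plan.query.get(1)
        print(p.planname)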