Note: Using flask_sqlalchemy here
I'm working on adding versioning to multiple services that share the same DB. To make sure it works, I'm adding unit tests that confirm I get an error (in this case the error should be StaleDataError). For services in other languages, the test pulls the same object twice from the DB, updates one instance, saves it, updates the other instance, and then tries to save that as well.
However, because SQLAlchemy keeps an identity map (an in-memory cache layer) between the DB and the service, when I update the first object the change automatically shows up on the other object I hold in memory. Does anyone have a way around this? I created a second session (that solution had worked in other languages), but SQLAlchemy knows not to hold the same object in two sessions.
I was able to test it manually by putting a time.sleep() halfway through the test and changing the data in the DB by hand, but I'd like a way to test this using just the unit-test code.
Example code:
def test_optimistic_locking(self):
    c = Customer(formal_name='John', id=1)
    db.session.add(c)
    db.session.flush()

    cust = Customer.query.filter_by(id=1).first()
    db.session.expire(cust)
    same_cust = Customer.query.filter_by(id=1).first()
    db.session.expire(same_cust)

    same_cust.formal_name = 'Tim'
    db.session.add(same_cust)
    db.session.flush()
    db.session.expire(same_cust)

    cust.formal_name = 'Jon'
    db.session.add(cust)
    with self.assertRaises(StaleDataError):
        db.session.flush()
    db.session.rollback()
It actually is possible; you need to create two separate sessions. See SQLAlchemy's own unit tests for inspiration. Here's a snippet from one of our unit tests, written with pytest:
def test_article__versioning(connection, db_session: Session):
    article = ProductSheetFactory(title="Old Title", version=1)
    db_session.refresh(article)
    assert article.version == 1

    db_session2 = Session(bind=connection)
    article2 = db_session2.query(ProductSheet).get(article.id)
    assert article2.version == 1

    article.title = "New Title"
    article.version += 1
    db_session.commit()
    assert article.version == 2

    with pytest.raises(sqlalchemy.orm.exc.StaleDataError):
        article2.title = "Yet another title"
        assert article2.version == 1
        article2.version += 1
        db_session2.commit()
Hope that helps. Note that we use "version_id_generator": False in the model, that's why we increment the version ourselves. See the docs for details.
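For reference, the same two-session idea can be shown in a fully self-contained sketch with no fixtures or factories. This is not the original poster's code: the Customer model below is a stand-in, it assumes SQLAlchemy 1.4+, and it uses SQLAlchemy's built-in version_id_col (rather than manual version increments) so the library does the version bookkeeping. A temporary SQLite file stands in for the shared DB so that both sessions really see the same data:

```python
import os
import tempfile

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base
from sqlalchemy.orm.exc import StaleDataError

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)
    formal_name = Column(String)
    version = Column(Integer, nullable=False)
    # SQLAlchemy bumps `version` on every UPDATE and adds it to the WHERE
    # clause; a mismatch is what raises StaleDataError.
    __mapper_args__ = {"version_id_col": version}

# File-backed SQLite so two connections share one database.
fd, path = tempfile.mkstemp(suffix=".db")
os.close(fd)
engine = create_engine(f"sqlite:///{path}")
Base.metadata.create_all(engine)

s1 = Session(engine)  # "service 1"
s2 = Session(engine)  # "service 2"

s1.add(Customer(id=1, formal_name="John"))
s1.commit()

c1 = s1.get(Customer, 1)
c2 = s2.get(Customer, 1)  # a distinct in-memory object: different session

c1.formal_name = "Tim"
s1.commit()               # version is now 2 in the DB

c2.formal_name = "Jon"    # c2 still believes version == 1
try:
    s2.commit()           # UPDATE ... WHERE version = 1 matches 0 rows
    raised = False
except StaleDataError:
    s2.rollback()
    raised = True

print(raised)  # -> True

s1.close()
s2.close()
os.remove(path)
```

Because each session has its own identity map, the two `get()` calls return different Python objects, which is exactly the situation the in-memory identity map prevents within a single session.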
For anyone who comes across this question: my current hypothesis is that it can't be done within a single session. SQLAlchemy's identity map guarantees that one session holds at most one object per row, which is exactly what prevents the stale write from ever happening in-process; since that line can't be exercised this way, we should trust that it works as expected.
I have an initial_data command in my project. When I try to create my test database and run this command ("py manage.py initial_data"), it doesn't work: the command creates the data in the main database, not the test database (in fact I can't even tell which one the test database is).
So I gave up on using this command and decided to create my initial data manually instead.
The initial_data command creates the data my other tables require. Why do I need this data? Because I have more than 500 tests, and these tests depend on shared test data.
So I have tests like this:
class BankServiceTest(TestCase):
    def setUp(self):
        self.bank_model = models.Bank
        self.country_model = models.Country

    def test_create_bank_object(self):
        self.bank_model.objects.create(name='bank')
        re1 = self.bank_model.objects.get(name='bank')
        self.assertEqual(re1.name, 'bank')

    def test_create_country_object(self):
        self.country_model.objects.create(name='country', code=100)
        bank = self.bank_model.objects.get(name='bank')
        re1 = self.country_model.objects.get(code=100)
        self.assertEqual(re1.name, 'country')
After running test_create_bank_object, I want the data it created to still be available in the second test; that is effectively what the initial_data command does. What is the right way to solve this?
I tried unittest.TestCase, but it didn't work for me: it creates the test data in the main database, which I don't want (or maybe I used it incorrectly). Essentially, I need my data to persist while the tests run.
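One mechanism worth knowing here (not from the original post, and shown with toy dicts standing in for the Django models): in plain unittest, anything built once in setUpClass is shared by every test method in the class, and Django's TestCase offers setUpTestData for the same purpose, creating the data once per class and restoring it between tests without touching the main database. A minimal plain-unittest sketch of the idea:

```python
import unittest

class BankServiceTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Built once for the whole class; every test method sees it.
        # With Django, the equivalent hook is setUpTestData(cls), which
        # would use the test database, not the main one.
        cls.banks = {"bank": {"name": "bank"}}
        cls.countries = {100: {"name": "country", "code": 100}}

    def test_create_bank_object(self):
        self.assertEqual(self.banks["bank"]["name"], "bank")

    def test_create_country_object(self):
        # The bank created at class setup is still available here.
        self.assertIn("bank", self.banks)
        self.assertEqual(self.countries[100]["name"], "country")

result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(BankServiceTest)
)
```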
Below is my current code; it connects to the organization successfully. How can I fetch the results of a query in Azure DevOps, like they do here? I know this was solved, but there's no explanation, and there's quite a big gap in what they're doing.
from azure.devops.connection import Connection
from msrest.authentication import BasicAuthentication
from azure.devops.v5_1.work_item_tracking.models import Wiql
personal_access_token = 'xxx'
organization_url = 'zzz'
# Create a connection to the org
credentials = BasicAuthentication('', personal_access_token)
connection = Connection(base_url=organization_url, creds=credentials)
wit_client = connection.clients.get_work_item_tracking_client()
results = wit_client.query_by_id("my query ID here")
P.S. Please don't link me to the github or documentation. I've looked at both extensively for days and it hasn't helped.
Edit: I've added the results line that successfully gets the query. However, it returns a WorkItemQueryResult object, which is not exactly what's needed: I need a way to view the columns of the query and the results for each column.
So I've figured this out, probably in the most inefficient way possible, but I hope it helps someone else and they find a way to improve it.
The issue with the WorkItemQueryResult stored in the variable "results" is that it doesn't expose the contents of the work items directly.
So the goal is to use the get_work_item method, which requires an id; you can get that (in a rather roundabout way) through item.target.id from the results' work_item_relations. The code below is added on:
for item in results.work_item_relations:
    work_item_id = item.target.id  # renamed from "id" to avoid shadowing the built-in
    work_item = wit_client.get_work_item(work_item_id)
    fields = work_item.fields
This gets the id from every work item in your result class and then grants access to the fields of that work item, which you can access by fields.get("System.Title"), etc.
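To go from there to something tabular, each fields dict can be flattened against the query's column reference names. This is a sketch on toy data: rows_from_fields is a hypothetical helper (not part of the azure-devops library), and in real code the column names would come from the query result's columns and each fields dict from wit_client.get_work_item(...).fields:

```python
def rows_from_fields(column_refs, items_fields):
    """Build one row dict per work item, keyed by column reference name.

    Fields a work item doesn't have set (e.g. an empty state) come back
    as None instead of raising a KeyError.
    """
    return [
        {ref: fields.get(ref) for ref in column_refs}
        for fields in items_fields
    ]

# Toy data shaped like work_item.fields dicts:
items = [
    {"System.Id": 1, "System.Title": "Fix login bug", "System.State": "Active"},
    {"System.Id": 2, "System.Title": "Update docs"},  # no State set
]
rows = rows_from_fields(["System.Id", "System.Title", "System.State"], items)
print(rows[1]["System.State"])  # -> None
```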
I'm learning BigQuery API using Python Client Libraries v0.28
https://googlecloudplatform.github.io/google-cloud-python/latest/bigquery/usage.html#run-a-simple-query
Wrote this simple code to fetch data from the table
1) Create client object
client_ = bigquery.Client.from_service_account_json('/Users/xyz/key.json')
2) Begin new Async query job
QUERY = 'SELECT visitid FROM `1234567.ga_sessions_20180101`'
query_job = client_.query(QUERY, job_id=str(uuid.uuid4()))
3) poll until the query is DONE
while query_job.state == 'RUNNING':
    time.sleep(5)
    query_job.reload()
4) Fetch the results in iteration
query_job.reload()
iter = query_job.result()
At this stage I'd like to fetch how many rows are in the table. As per the docs (and the GitHub code), iter is of type bigquery.table.RowIterator with a property iter.total_rows.
5) However, at this stage when I print:
print(iter.total_rows)
It keeps returning None
I'm pretty sure this table is NOT empty and my query is correctly formatted!
Any help to any pointers what am I missing here will be really helpful... Thanks a lot!
Cheers!
You need to also check query_job.error_result to make sure query succeeded.
You can also see your job in the UI, which can be useful for debugging, using project id and job id:
https://bigquery.cloud.google.com/results/projectid:jobid
Also, query_job.result() already waits for the job completion so you don't need to poll.
The current behavior of how RowIterator returns None is indeed perplexing. Luckily, according to this issue, tswast's comment from 10 days ago indicates that the developers are working on a better solution.
Current awkward behavior of .total_rows
Currently, .total_rows is initialized only once iteration begins. (In what follows, for clarity I renamed your iter variable to row_iter.)
row_iter = query_job.result()
itr = iter(row_iter)
first_row = next(itr)
print(row_iter.total_rows) # Now you get a number instead of None.
This is ugly because to continue the iteration, we must either handle the first row differently or call row_iter = query_job.result() again.
Temporary workaround
A currently-working alternative is to use the value of query_job._query_results.total_rows. Unfortunately this is cheating because _query_results is private, so there is no reason to expect that this will work in the future.
Future behavior
If tswast's proposal is implemented, then row_iter.total_rows will be initialized at the beginning, just as you expect.
Suggestion
In my code, I'm going to use something like
try:
    num_rows = row_iter.total_rows or query_job._query_results.total_rows
except AttributeError:
    num_rows = None
My question is actually a design problem. I am using Python + Selenium for automation, with PyUnit as the unit-test framework. I have a sheet in an Excel file with two columns: TestCaseID, which holds the test case id, and Run, which holds either Y or N to indicate whether that test case should be runnable. I want to read a particular test case id from this sheet and look up its Run value: if it is Y, the test case should be executed by the framework; otherwise it should not run.
Here is an excerpt of a test case that I have written:
class Test_ID_94017(unittest.TestCase):
    ex = Excel()

    def setUp(self):
        self.ex.setUpASheetInExcel('Select_Test_Cases_To_Run')
        if self.ex.getTestCaseRunStatusFromExcel("94017") == "Y":
            self.driver = Browser().createBrowserDriver()
            self.driver.maximize_window()
            self.driver.implicitly_wait(15)
            self.ex.setUpASheetInExcel('Login')
        else:
            return

    def test_CCPD_Regression001_PER_Custom_Check(self):
        # do something
        pass
The definition for getTestCaseRunStatusFromExcel(testCaseId) method is:
def getTestCaseRunStatusFromExcel(self, testCaseId):
    i = 1
    # xlrd's cell_value() already returns the cell's value,
    # so there is no extra .value attribute to read
    while self.workSheet.cell_value(i, 0) != testCaseId:
        i += 1
    return self.workSheet.cell_value(i, 1)
Here are the problems I am facing:
How should I give a condition in my existing code so that only for Y, my test case executes? Should I give an if condition in setUp method for that test case class as I have mentioned in my code above?
Is the way in which I am trying to iterate over the rows in a column (of the excel sheet) correct until I find my testcase ID and get its corresponding Run value (Y or N)?
Please help!
I implemented a similar design for a Selenium + TestNG suite. After reading the execution flags from Excel, I create the testng.xml dynamically and run that XML; TestNG provides features for the XML to be created and executed dynamically. I'm not sure if PyUnit has anything similar, but I hope this gives you some ideas.
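The closest PyUnit analog I know of is skipping at setUp time rather than rebuilding a suite file: unittest.TestCase.skipTest marks the test as skipped (visible in the report) instead of silently passing. A sketch, with a plain RUN_FLAGS dict standing in for the Excel lookup; the real code would call getTestCaseRunStatusFromExcel there instead:

```python
import unittest

# Hypothetical stand-in for the Excel sheet's TestCaseID -> Run column.
RUN_FLAGS = {"94017": "Y", "94018": "N"}

class Test_ID_94018(unittest.TestCase):
    TEST_CASE_ID = "94018"

    def setUp(self):
        # Skip (rather than silently pass) when the sheet says N.
        if RUN_FLAGS.get(self.TEST_CASE_ID) != "Y":
            self.skipTest(f"test case {self.TEST_CASE_ID} disabled in run sheet")
        # ...browser setup would go here...

    def test_something(self):
        self.fail("never reached when the flag is N")

result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(Test_ID_94018)
)
print(len(result.skipped))  # -> 1
```

Because the skip happens in setUp, no browser is launched for disabled cases, and the test report still records them as skipped rather than passed.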
I'm looking for a complete example of using select for update in SQLAlchemy, but haven't found one googling. I need to lock a single row and update a column, the following code doesn't work (blocks forever):
s = table.select(table.c.user == "test", for_update=True)
# Do update or not depending on the row
u = table.update().where(table.c.user == "test")
u.execute(email="foo")
Do I need a commit? How do I do that? As far as I know you need to:
begin transaction
select ... for update
update
commit
If you are using the ORM, try the with_for_update function:
foo = session.query(Foo).filter(Foo.id==1234).with_for_update().one()
# this row is now locked
foo.name = 'bar'
session.add(foo)
session.commit()
# this row is now unlocked
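A self-contained version of that pattern, assuming SQLAlchemy 1.4+ (Foo and name are placeholder names). Note that SQLite is used here only so the sketch runs anywhere: SQLAlchemy's SQLite dialect omits the FOR UPDATE clause since SQLite has no row locks, so this demonstrates the API; on PostgreSQL or MySQL the same code takes a real row lock until the commit:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Foo(id=1234, name="foo"))
    session.commit()

    # SELECT ... FOR UPDATE: on PostgreSQL/MySQL this blocks other
    # transactions from touching the row until we commit or roll back.
    foo = session.query(Foo).filter(Foo.id == 1234).with_for_update().one()
    foo.name = "bar"
    session.commit()  # lock released here

    final_name = session.get(Foo, 1234).name

print(final_name)  # -> bar
```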
Late answer, but maybe someone will find it useful.
First, you don't need to commit in between the queries (which I'm assuming you are asking about). Your second query hangs indefinitely because you are effectively creating two concurrent connections to the database: the first one obtains a lock on the selected records, and then the second one tries to modify locked records, so it can't work properly. (By the way, in the example given you never actually execute the first query, so I'm assuming your real tests did something like s.execute() somewhere.) A working implementation should look more like this:
s = conn.execute(table.select(table.c.user == "test", for_update=True))
u = conn.execute(table.update().where(table.c.user == "test"), {"email": "foo"})
conn.commit()
Of course in such simple case there's no reason to do any locking but I guess it is example only and you were planning to add some additional logic between those two calls.
Yes, you do need to commit, which you can do with an explicit Transaction, most simply via the engine's begin() context, which commits on exit. Also, the new values are specified in the values(...) method, not in execute:
>>> with my_engine.begin() as conn:
...     conn.execute(
...         table.update().
...         where(table.c.user == "test").
...         values(email="foo")
...     )
... # the transaction is committed when the block exits