In our DynamoDB database, we have a table that usually accumulates thousands of junk items from test data, and we clean it out once in a while.
But there is one specific item that we don't want to delete; when we do a delete-all, it gets deleted along with everything else.
Is there a way, in the table, to define that ID and stop the item from getting deleted? Or, if someone comes and wants to delete everything, it will delete everything except that one?
I can think of two options:
Add a policy, to anyone (or any role) who might perform this action, that denies permission to delete that item. You can accomplish this with an IAM condition (see Specifying Conditions: Using Condition Keys in the docs), specifically the dynamodb:LeadingKeys condition key.
Add a stream handler to your table and any time the record is deleted you can automatically add it back.
The first option is probably best, but you would need to be sure it's always attached to the appropriate users/roles. You also need to be sure you are handling the error you're going to get when you try to delete the record you aren't allowed to delete.
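For example, a minimal sketch of such a deny policy attached to a role with boto3 (the table ARN, role name, and the PROTECTED_ID partition key value are all placeholders to adjust):

import json
import boto3

iam = boto3.client("iam")

# All names below are placeholders: adjust the table ARN, the role name,
# and the protected partition key value to match your setup.
deny_delete = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "dynamodb:DeleteItem",
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/my-table",
        "Condition": {
            "ForAllValues:StringEquals": {
                "dynamodb:LeadingKeys": ["PROTECTED_ID"]
            }
        }
    }]
}

iam.put_role_policy(
    RoleName="table-cleanup-role",
    PolicyName="deny-delete-protected-item",
    PolicyDocument=json.dumps(deny_delete),
)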
The second option removes the need to worry about it, but it comes with the overhead of a Lambda running every time you create, update, or delete a record in the table (with some batching, so not EVERY change). It also opens up a brief window during which the record is deleted, so if it's important that the record NEVER be deleted then this isn't a viable option.
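A sketch of such a stream handler, assuming the table's partition key attribute is named id and the stream uses the OLD_IMAGE or NEW_AND_OLD_IMAGES view type (the table and key names are placeholders):

import boto3
from boto3.dynamodb.types import TypeDeserializer

table = boto3.resource("dynamodb").Table("my-table")  # placeholder name
deserializer = TypeDeserializer()
PROTECTED_ID = "PROTECTED_ID"  # partition key value of the item to keep

def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] != "REMOVE":
            continue
        keys = record["dynamodb"]["Keys"]
        if keys.get("id", {}).get("S") != PROTECTED_ID:
            continue
        # OldImage is only included with the OLD_IMAGE or
        # NEW_AND_OLD_IMAGES stream view types.
        old_image = record["dynamodb"].get("OldImage", {})
        item = {k: deserializer.deserialize(v) for k, v in old_image.items()}
        if item:
            table.put_item(Item=item)  # put the protected record back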
Using Python and the built-in sqlite3 module:
When I use a prepared statement, not knowing what columns the user will want to update:
UPDATE_AUTHOR = """UPDATE authors
SET lastName = ?, firstName = ?, age = ?, nationality = ?
WHERE _id = ?
"""
How can I replace the '?' with some value that will keep the current value for some of the columns?
From the user's perspective, I will tell them, for example, to press Enter to keep the current value. So for instance, they press Enter on lastName, update firstName, update age, and press Enter on nationality. I then replace each Enter with, hopefully, a value that keeps the current value.
Is that possible? If not, how can I solve this problem differently but efficiently?
I thought about building the prepared statement dynamically; in the above example, adding firstName = ? and age = ? after UPDATE authors SET, and then the rest of the statement, WHERE _id = ?. But this seems less convenient and less organized.
There are two ways of handling this. One is to build a specific UPDATE query containing only the fields that will change. As you said, it is less convenient because both the query and the parameter list have to be tweaked.
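It is less unwieldy than it sounds, though. A minimal sketch (the books.db file name and the sample values are made up; note that identifiers cannot be bound as parameters, so the table and column names must come from trusted code, never from user input):

import sqlite3

def build_update(table, changes, row_id):
    # Build "col = ?" pairs for only the columns the user changed.
    assignments = ", ".join("{} = ?".format(col) for col in changes)
    sql = "UPDATE {} SET {} WHERE _id = ?".format(table, assignments)
    return sql, list(changes.values()) + [row_id]

conn = sqlite3.connect("books.db")
# The user pressed Enter on lastName and nationality, so only the two
# remaining columns appear in the statement.
sql, params = build_update("authors", {"firstName": "Ada", "age": 36}, 7)
conn.execute(sql, params)
conn.commit()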
Another way is to consistently update all the columns, but pass the saved values for those which should not change. This is a common design in user interfaces:
the user is presented with all the values for an object and can change some of them;
if they confirm their choice, the application retrieves all the values, whether changed or not, and uses them in an UPDATE query.
Anyway, it is common to read all the values before changing some, so this is not necessarily expensive. And at the database level, changing one value or several in a single UPDATE has roughly the same cost: the record is loaded from disk (or cache), some values are updated (the cheapest operation), and the record is written back to disk. Even with the database caches, the most expensive part, in the databases I know, is loading and saving the record.
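A sketch of that read-then-update pattern with sqlite3 (file name and sample values are made up):

import sqlite3

conn = sqlite3.connect("books.db")
conn.row_factory = sqlite3.Row  # lets us turn rows into dicts

UPDATE_AUTHOR = """UPDATE authors
SET lastName = ?, firstName = ?, age = ?, nationality = ?
WHERE _id = ?
"""

def update_author(author_id, changes):
    # Read the current values first, then overlay only what changed.
    row = conn.execute(
        "SELECT lastName, firstName, age, nationality FROM authors WHERE _id = ?",
        (author_id,)).fetchone()
    current = dict(row)
    current.update(changes)
    conn.execute(UPDATE_AUTHOR, (current["lastName"], current["firstName"],
                                 current["age"], current["nationality"],
                                 author_id))
    conn.commit()

# The user pressed Enter on lastName and nationality:
update_author(7, {"firstName": "Ada", "age": 36})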
The only way I can find of adding new data to a TinyDB table is the table.insert() method. However, this appends the entry to the end of the table; I would like to control the sequence of entries, and sometimes I need to insert at an arbitrary index in the middle of the table. Is there no way to do this?
There is no way to do exactly what you are asking. The default index tracks insertion order: when you add data, it goes at the end. If you need to maintain a particular order, you can create a new property to handle that case and retrieve with a sort on that property.
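For example (a sketch; the seq field and file name are made up):

from tinydb import TinyDB

db = TinyDB("db.json")
table = db.table("items")

table.insert({"name": "first", "seq": 10})
table.insert({"name": "third", "seq": 30})
# "Insert in the middle" by picking an in-between sort value.
table.insert({"name": "second", "seq": 20})

ordered = sorted(table.all(), key=lambda doc: doc["seq"])
print([doc["name"] for doc in ordered])  # ['first', 'second', 'third']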
If you truly want to insert at a specific ID, you would need to add some logic to cascade the documents down. The logic would flow as follows (a sketch appears after the steps):
Insert a new record that is equal to the last record.
Then, go backwards and cascade each record down into the newly opened slot.
Stop when you reach the location you need, and update that record with what you want to insert, using its ID.
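A sketch of that cascade (it assumes the doc IDs form the contiguous sequence 1..len(table), i.e. nothing has ever been removed, and that all documents share the same keys, since update() merges fields rather than replacing the document):

from tinydb import TinyDB

db = TinyDB("db.json")
table = db.table("items")

def insert_at(table, doc_id, new_doc):
    last_id = len(table)
    table.insert(dict(table.get(doc_id=last_id)))  # 1. duplicate the last record
    for i in range(last_id, doc_id, -1):           # 2. cascade records downward
        table.update(dict(table.get(doc_id=i - 1)), doc_ids=[i])
    table.update(new_doc, doc_ids=[doc_id])        # 3. overwrite the target slot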
The performance would drag, since you have to shift the records down. There are other ways to maintain the list, but it is essentially the same as inserting a record into the middle of an array, and similar methods apply here. Good luck!
First off, this is my first project using SQLAlchemy, so I'm still fairly new.
I am making a system to work with GTFS data. I have a back end that seems to be able to query the data quite efficiently.
What I am trying to do though is allow the GTFS files to update the database with new data. The problem that I am hitting is pretty obvious: if the data I'm trying to insert is already in the database, we get a conflict on the uniqueness of the primary keys.
For efficiency reasons, I decided to use the following code for insertions, where model is the model class I would like to insert the data into, and data is a precomputed, cleaned list of dictionaries to insert:
for chunk in [data[i:i + chunk_size] for i in xrange(0, len(data), chunk_size)]:
    engine.execute(model.__table__.insert(), chunk)
There are two solutions that come to mind.
I find a way to do the insert such that, if there is a collision, we don't care and don't fail. I believe the code above uses a TableClause, so I checked there first, hoping to find a suitable replacement or flag, with no luck.
Before we perform the cleaning of the data, we get the list of existing primary key values, and if a given element matches on the primary keys, we skip cleaning and inserting it. I found that I can get the PrimaryKeyConstraint from Table.primary_key, but I can't seem to get the Columns out of it, or find a way to query for only specific columns (in my case, the primary keys).
Either should be sufficient, if I can find a way to do it.
After looking into both of these for the last few hours, I can't seem to find a way to do either. I was hoping someone might have done this before and could point me in the right direction.
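To make it concrete, here is roughly what I have in mind for each (a sketch only; OR IGNORE is SQLite syntax, MySQL would want prefix_with("IGNORE"), and select([...]) with a column list is the 1.x-style API matching my code above):

from sqlalchemy import select

# Option 1: have the database silently skip rows with duplicate keys.
insert_stmt = model.__table__.insert().prefix_with("OR IGNORE")
for chunk in [data[i:i + chunk_size] for i in xrange(0, len(data), chunk_size)]:
    engine.execute(insert_stmt, chunk)

# Option 2: fetch the existing primary key tuples, then filter them out.
pk_cols = list(model.__table__.primary_key.columns)
existing = {tuple(row) for row in engine.execute(select(pk_cols))}
data = [row for row in data
        if tuple(row[col.name] for col in pk_cols) not in existing]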
Thanks in advance for your help!
Update 1: There is a third option I failed to mention above: purge all the data from the database and reinsert it. I would prefer not to do this, as even with small GTFS files there are easily hundreds of thousands of elements to insert, and that seems to take about half an hour, which would mean a lot of downtime for updates if this makes it to production.
With SQLAlchemy, you simply create a new instance of the model class and merge it into the current session. SQLAlchemy will detect whether it already knows about this object (from the identity map or the database) and will add a new row to the database only if needed.
for row in data:  # `data` is your cleaned list of dicts
    newentry = model(**row)
    session.merge(newentry)
session.commit()
Also see this question for context: Fastest way to insert object if it doesn't exist with SQLAlchemy
Google App Engine erases everything in my table when I use the put statement. I don't want it to do that; it means more code to re-put everything back into the table every time something is added.
Basically, the issue is that the put statement erases everything. Is there a way to preserve what I don't want to update?
Here is the code (Python, web2py):
biography2 = bayside(key_name='bayside', Biography=form_biography.vars.one)
biography2.put()
redirect(URL("b1", "bayside"))
The put statement will update the biography under the bayside table, but it erases everything else in that entity (genre, songs, etc.). I want it to keep the other properties and only update the biography. Is that possible? Right now I have had to resort to a hack that updates every property when I really just want to update one; it is very frustrating and makes for a ton of extra code.
You need to get the entity from the datastore first. Then, you can modify the entity and put it back into the datastore.
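A sketch of that get-modify-put cycle, reusing the names from the question (it assumes bayside is a db.Model subclass with a Biography property):

# Fetch the existing entity by its key name instead of constructing a
# new one; constructing + put() replaces the whole entity.
biography2 = bayside.get_by_key_name('bayside')
if biography2 is None:
    biography2 = bayside(key_name='bayside')  # first save: create it
biography2.Biography = form_biography.vars.one
biography2.put()  # other properties (genre, songs, ...) are preserved
redirect(URL("b1", "bayside"))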
To me it looks like you are overwriting an existing entity instead of getting one and updating its properties.
You should take a look at the docs:
https://developers.google.com/appengine/docs/python/datastore/entities#Updating_an_Entity
I'm trying to use SQLAlchemy to implement a basic users-groups model where users can have multiple groups and groups can have multiple users.
When a group becomes empty, I want the group to be deleted, along with other things associated with it (fortunately, SQLAlchemy's cascades work fine for those simpler cases).
The problem is that cascade='all, delete-orphan' doesn't do exactly what I want; instead of deleting the group when the group becomes empty, it deletes the group when any member leaves the group.
Adding triggers to the database works fine for deleting a group when it becomes empty, except that triggers seem to bypass SQLAlchemy's cascade processing so things associated with the group don't get deleted.
What is the best way to delete a group when all of its members leave, and to have this deletion cascade to related entities?
I understand that I could do this manually by finding every place in my code where a user can leave a group and then doing the same thing as the trigger; however, I'm afraid I would miss places in the code (and I'm lazy).
The way I've generally handled this is to have a function on your user or group, say leave_group. When you want a user to leave a group, you call that function, and you can put any side effects you want in there. In the long term, this makes it easy to keep adding side effects (for example, checking that someone is allowed to leave a group).
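A minimal sketch, assuming the question's User/Group many-to-many setup (all names are placeholders):

from sqlalchemy.orm import object_session

def leave_group(user, group):
    group.users.remove(user)
    if not group.users:  # the last member just left
        # Deleting through the session lets the cascades configured on
        # Group clean up the associated entities.
        object_session(group).delete(group)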
I think you want cascade='save-update, merge, expunge, refresh-expire, delete-orphan'. This keeps everything from "all" except the "delete" cascade, but maintains "delete-orphan", which is what you're looking for, I think (delete when there are no more parents).
I had the same problem about three months ago: I have a Post/Tags relation and wanted to delete unused Tags. I asked on IRC, and SA's author told me that cascades on many-to-many relations are not supported, which kind of makes sense since there is no "parent" in many-to-many.
But extending SA is easy: you can probably use an AttributeExtension to check whether the group became empty when it is removed from a User, and delete it from there.
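In current SQLAlchemy versions, attribute events cover the same ground as AttributeExtension; a sketch (User.groups and Group.users are assumed from the question):

from sqlalchemy import event
from sqlalchemy.orm import object_session

@event.listens_for(User.groups, "remove")
def delete_group_if_empty(user, group, initiator):
    # Fires whenever a group is removed from a user's collection. The
    # user may still appear in group.users at this point, so treat
    # "only this user left" as empty.
    remaining = [u for u in group.users if u is not user]
    if not remaining:
        session = object_session(user)
        if session is not None:
            session.delete(group)  # Group's cascades handle the rest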
Could you post a sample of your table and mapper setup? It might be easier to spot what is going on.
Without seeing the code it is hard to tell, but perhaps there is something wrong with the direction of the relationship?