I have a Python script which imports a CSV file into an Asana project, one task per row.
It works fine, except that it creates tasks in the opposite order from the web interface, i.e. online you create new tasks at the end of the list, but the API appears to insert new tasks at the top.
Since the CSVs are annoyingly structured (i.e. sorted by presentation rather than by due date, broken into sections which I'm parsing at runtime, etc.), the order actually matters more than it ideally should.
So: is there an easy way to reverse the order so the API appends tasks at the end rather than inserting them at the top? Right now, I'm just reversing the CSV when I import it, and it works okay, but it's ugly and inefficient, and I wonder whether I'm missing something incredibly obvious :)
Any suggestions?
Tasks added to a project in the UI via the omni button, multi-homing, etc. will also appear at the top of the project unless you are explicitly in the project itself, focused on the center-pane task list, and press "Enter" to add the new task. A bit confusing, I agree.
As for task ordering in the API, you are correct that new tasks added to a project will appear at the top of the project. You can alter the position of a task by calling `POST /tasks/task-id/addProject` and providing `insert_before`, `insert_after`, or `section` parameters with the ID of the task or section to move this task next to.
With the Asana python client the call looks like this:
`client.tasks.add_project(<TASK_ID>, {'project': <PROJECT_ID>, 'insert_after': <PRECEDING_TASK>})`
You can also specify the memberships property of a task when creating in order to dictate the section a task should appear in.
To be honest, your current solution is probably the best; in the future we may allow you to specify the neighboring task upon creation. In that case you could pass the ID from each response as the `insert_after` of the next create, reversing the default top-insertion order. I will add that to our list of requests.
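In the meantime, here is a rough sketch of chaining each create with the `add_project` repositioning described above, so the project ends up in CSV order without reversing the file first. This assumes the older Python client API shown in this answer; the `rows` variable, the placeholder IDs, and the `id` response field are assumptions, not from the thread:

```python
import asana

client = asana.Client.access_token("<PERSONAL_ACCESS_TOKEN>")

previous_id = None
for row in rows:  # rows parsed from the CSV, in original order
    task = client.tasks.create_in_workspace("<WORKSPACE_ID>", {
        "name": row["name"],
        "projects": ["<PROJECT_ID>"],
    })
    # Re-anchor each new task after the previous one, so the project
    # ends up in the same order as the CSV.
    if previous_id is not None:
        client.tasks.add_project(task["id"], {
            "project": "<PROJECT_ID>",
            "insert_after": previous_id,
        })
    previous_id = task["id"]
```

This doubles the number of API calls per row, so reversing the CSV up front remains the cheaper option.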
Just looking for a bit of a push in the right direction.
I'm creating a multi-step user sign-up to receive a personalized assessment, but I'm having trouble wrapping my head around tracking multi-step progress from both a Django and a SQL point of view.
These are my steps, each to be a separate form:
Sign Up (Make an Account)
Complete Profile Information
Create an application
Complete required questions within application - This may need to be broken down even further.
I've figured out redirections from each form, but my biggest concern is saving which step they're up to. For example, if they close the browser at the end of step 2, how do I prompt them to come back and complete step 3 onwards?
To be clear, I'm not looking to save the state within the browser; it needs to be saved in the database to support access from other devices, for example.
On top of that, what if I were to allow them to make multiple applications (steps 3 and 4), while only allowing one to be open at a time? E.g. one gets rejected but we allow them to reapply.
Just looking for some tips/suggestions on how to structure something like this. If anyone has any good websites that I can use as an example on how they achieved something similar, that would also be helpful.
I've been down the path of redirection madness and that created more work than it was worth, so I'm hoping for a more dynamic method.
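For what it's worth, here is one rough way to model the "resume where you left off" and "one open application at a time" requirements in the database. All model and field names below are illustrative, not from the thread:

```python
from django.conf import settings
from django.db import models

class Application(models.Model):
    STATUS_CHOICES = [
        ("open", "Open"),
        ("submitted", "Submitted"),
        ("rejected", "Rejected"),
    ]

    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    status = models.CharField(max_length=20, choices=STATUS_CHOICES, default="open")
    # Last completed step: 1 = sign up, 2 = profile, 3 = application, 4 = questions
    current_step = models.PositiveSmallIntegerField(default=1)

    class Meta:
        constraints = [
            # Enforce at most one open application per user at the DB level
            models.UniqueConstraint(
                fields=["user"],
                condition=models.Q(status="open"),
                name="one_open_application_per_user",
            )
        ]
```

On login (or on each request), look up the user's open application and redirect to the view for `current_step + 1`; reapplying after a rejection just means creating a new row, which the conditional unique constraint permits.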
Apologies for not having a pithier way to explain that...tl;dr the best real-world example of what I'm looking for is Slack's team-switching functionality, wrt going from one team to an entirely different one while remaining in the same window and saving session data.
In case you'd like more context: essentially, I have been working on an organizational/note-taking tool for myself, composed of template data structures (Entities, Categories, Collections, and Tags). For example, in my bookmark manager, I would use the Entities to represent my bookmarks, the Categories to group by documentation/tutorial/etc., and the Collections to group by my projects/skills involved/etc. Then I'd like to be able to interact with my bookmark manager through an interface, organize based on whatever query I feel like (e.g. 'search by tag:data-science' or 'search collection:projects by category'), and even take notes within the app. I'd like to repeat this same process with a separate app that I would use to take notes on/structure course material, another one for organizing my Spotify playlists, and so on. It may sound a bit silly, but it's how I'd like to organize my bookmarks and lists, because a hierarchical/nested structure just doesn't cut it for me.
I'm thinking that I should deploy this in one package and just essentially create a bunch of factory apps, because it seems like I'll be reusing the same code over and over again since I plan to keep the same templates for the apps and data structures (e.g. I plan to extend a base Entity class with specific attributes depending on the app). And if I'm doing that, then I like the idea of having all of this in one repository, where I could switch between the separate apps via an entry point that behaves like Slack's does. But I'm not sure what the best way to do this is. I'm visualizing one central login that allows you to toggle between multiple apps depending on which one you're logging into, but maybe that's unrealistic/too grandiose. I'm reading into containerization a la Docker as well as application dispatching patterns but these seem like they work best for apps that are very different/independent or even meant to interact directly, whereas I expect my apps to share much of their templates/configuration. But perhaps I'm misinterpreting?
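To make the "factory apps + dispatching" idea concrete, here is a minimal sketch assuming Flask (whose documentation describes both the application-factory and application-dispatching patterns the question alludes to). All names and URL prefixes are illustrative:

```python
from flask import Flask
from werkzeug.middleware.dispatcher import DispatcherMiddleware

def create_app(app_name):
    # Shared factory: every tool reuses the same base configuration/templates
    app = Flask(app_name)
    app.config["APP_NAME"] = app_name

    @app.route("/")
    def index():
        return "%s home" % app_name

    return app

# One process, one codebase, several "apps" mounted under path prefixes,
# roughly analogous to switching Slack teams within one window.
application = DispatcherMiddleware(create_app("bookmarks"), {
    "/courses": create_app("courses"),
    "/playlists": create_app("playlists"),
})
```

Because all the sub-apps live in one process, they can share a session/auth backend, which is what makes the single central login plausible without Docker-style isolation.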
I would appreciate any nudge in the right direction, even a small one. :)
I have written microservices for auth, location, etc.
Each microservice has its own database, and some data is duplicated: e.g. location exists in all the databases for these services. When any of my projects needs a user's location, it first looks in the cache; if not found, it hits the database. So far so good. Now, when a location is changed in any one of the databases, I need to update it in the other databases as well as update my cache.
Currently I have a model (called Subscription) with a URL as its field. Whenever a location is changed in any database, a Subscription object is created. A periodic task checks the Subscription model; when it finds such objects, it hits the APIs of the other services, updates the location, and updates the cache.
I am wondering if there is any better way to do this?
> I am wondering if there is any better way to do this?

"Better" is entirely subjective. If it meets your needs, it's fine.
Something to consider, though: don't store the same information in more than one place.
If you need an address, look it up from the service that provides addresses, every time.
This may be a performance hit, but it eliminates the problem of replicating the data everywhere.
Another option would be a more proactive approach, as suggested in the comments.
Instead of creating a task list for changes and processing it periodically, send a message across RabbitMQ immediately when the change happens. Let every service that needs to know get a copy of the message and update its own cache.
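A minimal sketch of the publish side of that idea, using the pika client and a fanout exchange (the exchange name and payload shape here are illustrative, not from the thread):

```python
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
# A fanout exchange delivers a copy of each message to every bound queue,
# so each interested service can keep its own cache and database current.
channel.exchange_declare(exchange="location_changed", exchange_type="fanout")

def publish_location_change(user_id, location):
    channel.basic_publish(
        exchange="location_changed",
        routing_key="",  # ignored by fanout exchanges
        body=json.dumps({"user_id": user_id, "location": location}),
    )
```

Each consuming service declares its own queue, binds it to the exchange, and applies the update on receipt, replacing both the Subscription model and the periodic task.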
Just remember, though: every time you have more than one copy of the information, you reduce the "correctness" of the system as a whole. It will always be possible for the information in one of your apps to be out of date because it did not get an update from the official source.
I'm sure a lot of services online today must perform a task similar to what I'm doing. A user has friends, and I want to get all of each friend's status updates posted after that friend's last known status update date.
That was a mouthful, but here's what I have:
A user has, say, 10 friends. What I want to do is get new status updates for all of their friends. So, for each friend, I prepare a dictionary with their ID and last status date. Something like:
friends = []
for friend in user.friends:  # assuming the user's friends are iterable
    friends.append({'userId': friend.id,
                    'lastDate': friend.mostRecentStatusUpdate.date})
Then, on my server side, I do something like this:
for friend in friends:
    userId = friend['userId']
    lastDate = friend['lastDate']
    # each get below launches an RPC and does a separate table lookup,
    # so if I have 100 friends, this seems extremely inefficient
    get statusUpdates for userId where postDate > lastDate
The problem with the above approach is that on the server side each iteration of the for loop launches a new query, which launches an RPC. So if there are a lot of friends, it would seem to be really inefficient.
Is there a better way to design my structure to make this task more efficient? How does, say, Twitter do something like this when it fetches new timeline updates?
From a high level, I'd suggest you follow the prescribed App Engine mantra: make writes expensive to make reads cheap.
For each user, you should keep a collection of known friends and their last status updates. This allows you to update friends' data at write time. This is expensive for the write, but saves you processing and querying at read time. This also assumes that you read more often than you write.
Additionally, if you are just trying to display the N latest updates for each friend, I would suggest using an NDB StructuredProperty to store the Friend objects, so you can create a matching data structure. As part of the object, keep a collection of keys that correspond to the status updates. When a status update is written, add its key to the collection and potentially remove older entries (if space is a concern).
This way, when you need to retrieve the updates, you get them by key instead of with more expensive queries.
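A rough sketch of that structure with NDB (all model and property names are assumptions, not from the thread):

```python
from google.appengine.ext import ndb

class StatusUpdate(ndb.Model):
    author_id = ndb.StringProperty()
    text = ndb.TextProperty()
    post_date = ndb.DateTimeProperty(auto_now_add=True)

class Friend(ndb.Model):
    user_id = ndb.StringProperty()
    # Keys of this friend's most recent updates, maintained at write time
    recent_update_keys = ndb.KeyProperty(kind=StatusUpdate, repeated=True)

class Timeline(ndb.Model):
    friends = ndb.StructuredProperty(Friend, repeated=True)

def fetch_updates(timeline):
    # One batched get by key instead of one query (and RPC) per friend
    keys = [key for friend in timeline.friends
            for key in friend.recent_update_keys]
    return ndb.get_multi(keys)
```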
An alternative that avoids any additional lookups is to keep the entire update instead of just the keys. However, this takes much more storage: 10 friends, all interconnected, means 100 copies of the same updates.
I'm trying to keep my RESTful site DRY, and I can't come up with a good way to factor out the code that dynamically selects each "user's" separate database. We've got a separate database for each client; the client comes in as part of the URL and is passed into each view as a keyword arg. I want to give each and every view the behavior of accessing the corresponding database WITHOUT having to make sure each programmer writing a view remembers to use
Thing.objects.using(user).all()
and
t = Thing()
t.save(using=user)
every time. It seems like there ought to be some way to intercept the request and set the default database based on the args to the view before it hits the view, allowing us to use the usual
Thing.objects.all()
This would also have the advantage of factoring out all the user resolution code into a more appropriate place.
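(For reference, one common way to do exactly that interception is a middleware that stashes the alias in a thread-local, plus a database router. A minimal sketch, with all names illustrative, assuming each client's alias is already configured in DATABASES:)

```python
import threading

_local = threading.local()

class ClientDatabaseMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # Assumes the client/database alias is the first path segment,
        # e.g. /acme/things/ -> database alias "acme"
        parts = request.path_info.strip("/").split("/")
        _local.db_alias = parts[0] if parts and parts[0] else "default"
        return self.get_response(request)

class ClientDatabaseRouter:
    def db_for_read(self, model, **hints):
        return getattr(_local, "db_alias", None)

    def db_for_write(self, model, **hints):
        return getattr(_local, "db_alias", None)
```

(Returning None falls back to Django's normal "default" alias; the router must be listed in DATABASE_ROUTERS and the middleware in MIDDLEWARE.)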
We do this using the following technique.
Apache picks off the first part of the path and routes it to a specific mod_wsgi daemon.
Each mod_wsgi daemon is a different customer's installation.
We have many parallel customers, each with (nearly) identical code, all based off a single common installation of the base software.
Each customer has a separate settings.py with their unique configuration.
They don't (actually can't) know about each other because Apache has peeled off the top layer of the path for us.
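To make the per-customer settings.py concrete, a sketch of what each customer's file might look like (the module name, database engine, and values are illustrative, not from the answer):

```python
# customer_a/settings.py
from base.settings import *  # the single common installation's base settings

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "customer_a",
    }
}
ALLOWED_HOSTS = ["customer-a.example.com"]
```

Each mod_wsgi daemon points DJANGO_SETTINGS_MODULE at its own customer's file, so the shared codebase never has to branch on the customer at runtime.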