Trying to automate reopening a bunch of issues on a kanban board in GitLab does not seem to work as expected. I can edit pretty much everything else (labels, title, description, due date and so on), but changing the state from 'closed' to 'opened' does nothing.
Funnily enough, the save() method takes some time to complete and does not throw any error (though it does when I try to update an issue without having changed anything).
I use this to instantiate the gitlab object and fetch the issue:
import gitlab
gl = gitlab.Gitlab(URL, private_token=TOKEN, api_version=4)
gl.auth()
project = gl.projects.get(project_id, lazy=True)
editable_issue = project.issues.get(issue_id)
This works:
editable_issue.labels.append('some label')
editable_issue.save()
editable_issue.title = title + '\n' + 'edited.'
editable_issue.save()
But this doesn't:
editable_issue.state = 'opened'
editable_issue.save()
I have also tried changing several fields, like so:
editable_issue.state = 'opened'
editable_issue.closed_at = None
editable_issue.save()
But although I receive no error, the issue is not being updated.
Is there something I'm overlooking?
EDIT:
Oh yes, I forgot to mention that the documentation mentions state_event, while my object does not know of any such attribute.
Is this attribute perhaps only there when a ProjectIssue is created via the API?
As mentioned in the documentation, you must set the state_event attribute and then save. This attribute won't exist at first; you must set it yourself.
issue.state_event = 'reopen'
issue.save()
Modifying state or closed_at won't work because the issue edit API doesn't accept state or closed_at parameters.
(This is also why state_event doesn't exist on the object at first: the issue detail API doesn't include state_event in its response, and all attributes are set dynamically from the response JSON.)
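For example, reopening every closed issue on a project could look like this (a sketch; URL, TOKEN, and project_id are the placeholders from the question):
import gitlab

gl = gitlab.Gitlab(URL, private_token=TOKEN)
gl.auth()
project = gl.projects.get(project_id, lazy=True)

# Fetch closed issues and flip each one back to opened via state_event
for issue in project.issues.list(state='closed', as_list=False):
    issue.state_event = 'reopen'
    issue.save()
(In newer python-gitlab releases, iterator=True replaces as_list=False.)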
Similar to this question, I want to extract the info of a cron job trigger from APScheduler.
However, I need only the "day_of_week" field, not everything. Using
for job in scheduler.get_jobs():
    for f in job.trigger.fields:
        print(f.name + " " + str(f))
I can see all the fields, e.g. week, hour, day_of_week, but
job.trigger.day_of_week is seemingly 'not an attribute' of the CronTrigger object. I'm confused about what kind of object this job.trigger is and how its fields are packed. I tried to read the code on GitHub, but it is even more puzzling.
How do I extract only the one field day_of_week, and how is this trigger class structured?
Diving deeper, I found
apscheduler.triggers.cron.fields.DayOfWeekField
which I can reach by indexing job.trigger.fields[4]. That seems like really bad style, since it depends on the position of the field. What I get is this DayOfWeekField, from which, comically, I am not able to retrieve its value either:
a.get_value
<bound method DayOfWeekField.get_value of DayOfWeekField('day_of_week', '1,2,3,4')>
The structure of the fields is coded here, but I don't know what to do with dateval, the argument of get_value().
Eventually, after hopefully understanding the concept, I want to do
if job_day_of_week contains 'mon'
if job_day_of_week == '*'
print(job_day_of_week)
I am grateful for any suggestions/hints!
Looking at the code, you should be able to get the day_of_week field without hardcoding the index by using the CronTrigger class's FIELD_NAMES class attribute, e.g.
from apscheduler.triggers.cron import CronTrigger

dow_index = CronTrigger.FIELD_NAMES.index('day_of_week')
dow = job.trigger.fields[dow_index]
Getting the value of the field is a bit more complicated, but it appears that BaseField implements __str__, which gives you the expression that created the field as a string you can parse to find what you want:
dow_value_as_string = str(dow)
if 'mon' in dow_value_as_string:
    # do something
if dow_value_as_string == "*":
    # do something else
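Putting it together, a minimal sketch (assumes an APScheduler 3.x scheduler with jobs already added, as in the question; note that str(field) returns the original expression, so 'mon' only matches if the job was defined with day names rather than numbers):
from apscheduler.triggers.cron import CronTrigger

for job in scheduler.get_jobs():
    if isinstance(job.trigger, CronTrigger):
        # Look the field up by name instead of hardcoding an index
        dow_index = CronTrigger.FIELD_NAMES.index('day_of_week')
        dow_value_as_string = str(job.trigger.fields[dow_index])
        if dow_value_as_string == '*':
            print('runs every day of the week')
        elif 'mon' in dow_value_as_string:
            print('runs on Mondays')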
I find myself stuck on this problem, and repeated Googling, checking SO, and reading numerous docs have not helped me get the right answer, so I hope this isn't a bad question.
One entity I want to create is an event taking place during a convention. I'm giving it the property start_time = ndb.TimeProperty(). I also have a property date = ndb.DateProperty(), and I'd like to keep the two discrete (in other words, not use a single DateTimeProperty).
When a user enters information to create an event, I want to specify defaults for any fields they do not enter at creation, and I'd like to set the default time to midnight, but I can't seem to format it correctly so the service accepts it (I get a constant 503 Service Unavailable response when I try it in the API Explorer).
Right now I've set things up like this (some unnecessary details removed):
event_defaults = {
    ...
    "start_time": 0000,
    ...
}
and then I loop over my default values to enter them into a dictionary which I'll use to .put() the info on the server:
data = {field.name: getattr(request, field.name) for field in request.all_fields()}
for default in event_defaults:
    if data[default] in (None, []):
        data[default] = event_defaults[default]
        setattr(request, default, event_defaults[default])
In the logs, I see the error Encountered unexpected error from ProtoRPC method implementation: BadValueError (Expected time, got 0). I have also tried using the time and datetime modules, but I must be using them incorrectly, because I still receive errors.
I suppose I could work around this problem by using ndb.StringProperty() instead, and just deal with strings, but then I'd feel like I would be missing out on a chance to learn more about how GAE and NDB work (all of this is for a project on udacity.com, so learning is certainly the point).
So, how can I structure my default time properly for midnight? Sorry for the wall of text.
Link to code on github. The conference.py file contains the code I'm having the trouble with, and models.py contains my definitions for the entities I'm working with.
Update: I'm a dummy. I had my model class using a TimeProperty() and the corresponding message class using a StringField(), but I was never making the proper conversion between the expected types. That's why I could never seem to give it the right thing: the code expected two different things at different points. Issue resolved.
TimeProperty expects a datetime.time value:
import datetime

event_defaults = {
    ...
    "start_time": datetime.time(),
    ...
}
More in the docs: https://cloud.google.com/appengine/docs/python/ndb/entity-property-reference#Date_and_Time
Use the datetime module to convert it into a valid ndb time property value:
if data['time']:
    data['time'] = datetime.datetime.strptime(data['time'][:5], "%H:%M").time()
else:
    data['time'] = datetime.datetime.now().time()
PS: don't forget to replace data['time'] with your field name.
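Putting the two answers together with the question's own defaults loop, a minimal sketch (field names are the ones from the question; the StringField-to-TimeProperty conversion matches the resolution in the update above):
import datetime

event_defaults = {
    "start_time": datetime.time(0, 0),  # midnight as a real datetime.time, not an int
}

data = {field.name: getattr(request, field.name) for field in request.all_fields()}
for name, default in event_defaults.items():
    if data.get(name) in (None, []):
        data[name] = default

# If the ProtoRPC message declares start_time as a StringField (as in the
# update above), convert it before the ndb .put():
if isinstance(data.get("start_time"), basestring):
    data["start_time"] = datetime.datetime.strptime(data["start_time"][:5], "%H:%M").time()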
This is something I have been trying to solve for 3 days now, and I just can't get my head around why it is not working.
I have a method that creates a new version of an object. It used to work like this: you pass in the source object (sou), and a new version is created from it. You can also pass in a destination, which is not really important in this example. Now I wanted to add locking to this method because we want to support multiple users, so I need to be sure that I always have the most current object from which to create the new one. I therefore added a line that fetches the newest object; if there is no newer object in the database, it is the same one anyway.
def createRevision(request, what, sou, destination=None, ignore=[], **args):
    ...
    if "initial" not in args.keys():
        source = get_object_or_404(BaseItem, ppk=sou.ppk, project=sou.project, current=True)
        print "------------"
        print source == sou
        print "------------"
        # This outputs True
    else:
        source = sou
Further down in the method I do something like:
source.current = False
source.save()
Basically, the idea is that I pass in a BaseItem, and if I don't specify the "initial" keyword, I get the current item of that project with the same ppk (which is a special random pk used in conjunction with current). I do this just to be on the safe side, so that I really have the most current object. If it is the initial version, I just use the passed-in object, as there cannot be another version yet.
So now the problem: everything works fine if I use sou in this method. I can save it, etc. But as soon as I use source and "initial" is not in args, it just doesn't save. The print statement tells me they are the same object, and everything I print after the save tells me it has been saved, but it just doesn't persist.
source.current = False
source.save()
print "SAVED !!!!"
print source.pk
print source.current
rofl = get_object_or_404(BaseItem, pk=source.pk, project=sou.project)
print rofl.pk
print rofl.current
This outputs the same pk and the same current value, but somehow the change is not properly saved: as soon as I look into the Django admin or query for current=True, the old value is still there.
I really don't know what to do anymore.
Why does it work without a problem if I pass the object into the method, but start to fail when I fetch the exact same object inside the method?
Of course I call the method with the same object:
x = get_object_or_404(BaseItem, ppk=sou.ppk, project=sou.project, current=True)
createRevision(request, "", x)
Thank you pztrick for the hint about the caches. I finally solved it. The problem was that I was doing:
x = get_object_or_404(BaseItem, ppk=sou.ppk, project=sou.project, current=True)
createRevision(request, "", x)
# .... loads of lines of code
unlock(x)
unlock is a method I wrote that just sets a timestamp so I know no other user is editing the object. The problem was that I was saving x in createRevision with all the correct data, but of course unlock(x) still had a reference to the old, not-updated object and was saving that again, overwriting my changes from createRevision.
Thank you again to everyone who helped with this.
I think you may be running afoul of model manager caching, which is intended to limit database queries. However, by invoking the .all() method on the model manager you force it to hit the database again.
So, try this: replace the BaseItem class argument with the model manager's .all() QuerySet:
source = get_object_or_404(BaseItem.objects.all(), ppk=sou.ppk, project=sou.project, current=True)
# ...
rofl = get_object_or_404(BaseItem.objects.all(), pk=source.pk, project=sou.project)
get_object_or_404 supports model classes, model managers, or QuerySets as the first parameter, so this is valid.
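For illustration, here is the stale-reference scenario from the accepted resolution, with the re-fetch that avoids it (a sketch; unlock is the question's own helper):
x = get_object_or_404(BaseItem, ppk=sou.ppk, project=sou.project, current=True)
createRevision(request, "", x)  # saves current=False on a different instance it fetched itself

# x still holds current=True in memory; calling unlock(x) and saving it
# would silently overwrite the change. Re-fetch before touching it again:
x = BaseItem.objects.get(pk=x.pk)
unlock(x)
(In modern Django, x.refresh_from_db() does the re-fetch in place.)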
I have quite an obvious piece of code that fails:
temp = MyModel(
    required_field1=AnotherModel.objects.filter(name="example1")[0],
    required_field2=YetAnotherModel.objects.filter(name="example2")[0],
)
The problem is that after that, temp is set to None! I get no traceback and no error message; it just doesn't work and leaves None. required_fieldN (for N=1|2) are the only mandatory fields in MyModel, and objects of AnotherModel and YetAnotherModel exist. Does anyone have any idea why it does not construct a new object referenced by temp? I can't paste all my actual code here because it is a corporate project, but if in doubt, please ask and I can probably explain more.
EDIT:
OK, I figured out why it fails: I was invoking a method on the newly constructed object, and that method crashed in this strange way. This topic can now be closed.
Unless required_field1 and required_field2 are foreign keys, the above code won't work.
Are you sure you didn't mean:
temp = MyModel(
    required_field1=unicode(AnotherModel.objects.filter(name="example1")[0]),
    required_field2=unicode(YetAnotherModel.objects.filter(name="example2")[0]),
)
Or:
temp = MyModel(
    required_field1=AnotherModel.objects.filter(name="example1")[0].some_field,
    required_field2=YetAnotherModel.objects.filter(name="example2")[0].some_field,
)
I have a queryset that I need to pickle lazily, and I am having some serious trouble. cPickle.dumps(queryset.query) throws the following error:
Can't pickle <class 'myproject.myapp.models.myfile.QuerySet'>: it's not the same object as myproject.myapp.models.myfile.QuerySet
Strangely (or perhaps not so strangely), I only get that error when I call cPickle from another method or a view, but not when I call it from the command line.
I made the method below after reading PicklingError: Can't pickle <class 'decimal.Decimal'>: it's not the same object as decimal.Decimal and Django mod_wsgi PicklingError while saving object:
from copy import deepcopy
import cPickle

def dump_queryset(queryset, model):
    from segment.segmentengine.models.segment import QuerySet
    memo = {}
    new_queryset = deepcopy(queryset, memo)
    memo = {}
    new_query = deepcopy(new_queryset.query, memo)
    queryset = QuerySet(model=model, query=new_query)
    return cPickle.dumps(queryset.query)
As you can see, I am getting extremely desperate -- that method still yields the same error. Is there a known, non-hacky solution to this problem?
EDIT: Tried using --noreload when running the Django development server, but to no avail.
EDIT2: I had a typo in the error I displayed above -- it was models.QuerySet, not models.mymodel.QuerySet that it was complaining about. There is another nuance here, which is that my models file is broken out into multiple modules, so the error is ACTUALLY:
Can't pickle <class 'myproject.myapp.models.myfile.QuerySet'>: it's not the same object as myproject.myapp.models.myfile.QuerySet
Where myfile is one of the modules under models. I have an __init__.py in models with the following line:
from myfile import *
I wonder if this is contributing to my issue. Is there some way to change my __init__.py to protect myself against this? Are there any other tests to try?
EDIT3: Here is a little more background on my use case: I have a model called Context that I use to populate a UI element with instances of mymodel. The user can add/remove/manipulate the objects on the UI side, changing their context, and when they return, they can keep their changes, because the context serialized everything. A context has a generic foreign key to different types of filters/ways the user can manipulate the object, all of which must implement a few methods that the context uses to figure out what it should display. One such filter takes a queryset that can be passed in and displays all of the objects in that queryset. This provides a way to pass in arbitrary querysets that are produced elsewhere and have them displayed in the UI element. The model that uses the Context is hierarchical (using mptt for this), and the UI element makes a request to get children each time the user clicks around, we can then take the children and determine if they should be displayed based on whether or not they are included in the Context. Hope that helps!
EDIT4: I am able to dump an empty queryset, but as soon as I add anything of value, it fails.
EDIT5: I am on Django 1.2.3.
This may not be the case for everyone, but I was using an IPython notebook and had a similar issue pickling my own class.
The problem turned out to be a reload call:
from dir.my_module import my_class
reload(dir.my_module)
Removing the reload call and then re-running the import and the cell where the instance of that object was created allowed it to be pickled.
Not so elegant, but perhaps it works:
Add the directory of the myfile module to os.sys.path and use only import myfile in each module where you use it (remove any from segment.segmentengine.models.segment import ... anywhere in your project).
According to this doc, pickling a QuerySet should not be a problem, so the problem likely comes from somewhere else.
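For reference, the approach that doc describes pickles only the query attribute and re-attaches it to a fresh QuerySet (a sketch using the question's cPickle; MyModel stands in for your model):
import cPickle

# Pickle just the inner query, as the docs suggest
pickled = cPickle.dumps(queryset.query)

# Later, rebuild a queryset and restore the query onto it
restored = MyModel.objects.all()
restored.query = cPickle.loads(pickled)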
Since you mentioned:
EDIT2: I had a typo in the error I displayed above -- it was models.QuerySet, not models.mymodel.QuerySet that it was complaining about. There is another nuance here, which is that my models file is broken out into multiple modules, so the error is ACTUALLY:
The second error message you provided looks the same as the previous one; is that what you meant?
The error message you provided looks weird: since you are pickling queryset.query, the error should relate to the django.db.models.sql.Query class instead of the QuerySet class.
Some modules or classes may have the same name; they will override each other and cause this kind of issue. To make things easier, I recommend using "import ooo.xxx" instead of "from ooo import *".
You could also try:
import ooo.xxx as othername