I find myself stuck on this problem, and repeated Googling, checking SO, and reading numerous docs have not helped me get the right answer, so I hope this isn't a bad question.
One entity I want to create is an event taking place during a convention. I'm giving it the property start_time = ndb.TimeProperty(). I also have a property date = messages.DateProperty(), and I'd like to keep the two discrete (in other words, not using DateTimeProperty).
When a user enters information to create an event, I want to specify defaults for any fields they do not enter at creation. I'd like to set the default time to midnight, but I can't seem to format it in a way the service accepts (I get a constant 503 Service Unavailable response when I try it in the API explorer).
Right now I've set things up like this (some unnecessary details removed):
event_defaults = {...
...
"start_time": 0000,
...
}
and then I try looping over my default values to enter them into the dictionary I'll use to .put() the info on the server:
data = {field.name: getattr(request, field.name) for field in request.all_fields()}

for default in event_defaults:
    if data[default] in (None, []):
        data[default] = event_defaults[default]
        setattr(request, default, event_defaults[default])
In the logs, I see the error Encountered unexpected error from ProtoRPC method implementation: BadValueError (Expected time, got 0). I have also tried using the time and datetime modules, but I must be using them incorrectly, because I still receive errors.
I suppose I could work around this problem by using ndb.StringProperty() instead, and just deal with strings, but then I'd feel like I would be missing out on a chance to learn more about how GAE and NDB work (all of this is for a project on udacity.com, so learning is certainly the point).
So, how can I structure my default time properly for midnight? Sorry for the wall of text.
Link to code on github. The conference.py file contains the code I'm having the trouble with, and models.py contains my definitions for the entities I'm working with.
Update: I'm a dummy. I had my model class using a TimeProperty() and the corresponding message class using a StringField(), but I was never making the proper conversion between the expected types. That's why I could never seem to give it the right thing: it expected two different things at different points in the code. Issue resolved.
TimeProperty expects a datetime.time value
import datetime
event_defaults = {...
...
"start_time": datetime.time(),
...
}
More in the docs: https://cloud.google.com/appengine/docs/python/ndb/entity-property-reference#Date_and_Time
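As a minimal sketch of how the default then flows into the datastore (the Event model here is just a stand-in for the one in the question):

import datetime
from google.appengine.ext import ndb

class Event(ndb.Model):
    # stand-in model mirroring the question's start_time property
    name = ndb.StringProperty()
    start_time = ndb.TimeProperty()

data = {"name": "Keynote", "start_time": None}
if data["start_time"] in (None, []):
    data["start_time"] = datetime.time()   # midnight, 00:00:00

Event(**data).put()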
Use the datetime module to convert the string into a valid ndb time property value:
if data['time']:
    data['time'] = datetime.datetime.strptime(data['time'][:5], "%H:%M").time()
else:
    data['time'] = datetime.datetime.now().time()
PS: don't forget to replace data['time'] with your own field name.
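Going the other way (a sketch, assuming the outgoing ProtoRPC message field is a plain string, as in the question's update), strftime turns the stored time back into text:

import datetime

stored = datetime.time(0, 0)              # what ndb.TimeProperty() holds
as_string = stored.strftime("%H:%M")      # "00:00", ready for a StringField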
Related
I'm attempting to build an XML document out of request.POST data in a Django app:
ElementTree.Element("occasion", text=request.POST["occasion"])
PyCharm is giving me an error on the text parameter saying Expected type 'str', got 'Type[QueryDict]' instead. I only bring up PyCharm because I know its type checker can be overzealous sometimes. However, I haven't been able to find anything about this issue specifically.
Am I doing something wrong? Or should I try to silence this error?
Assuming you're not posting anything unusual, like JSON, request.POST['occasion'] should return a string: either the field 'occasion' or the last value of the list submitted under that name (or raise an error if it's missing; use request.POST.get('occasion') to avoid that).
There are apparently some HttpRequest-related issues with PyCharm, but the way to double-check whether that's happening here is to print and/or type-check request.POST['occasion'] before that line to confirm what it returns, e.g.:
occasion = request.POST['occasion']
print(type(occasion), occasion)
ElementTree.Element("occasion", text=occasion)
In the last line, using a variable assigned ahead of time might be a simple way to remove the PyCharm error without turning off warnings, depending on your tolerance for extra code.
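Putting the two suggestions together, a small helper might look like this (the field name occasion comes from the question; the "" default is my own assumption):

from xml.etree import ElementTree

def occasion_element(request):
    # .get() avoids a KeyError if the field is missing, as mentioned above
    occasion = request.POST.get("occasion", "")
    print(type(occasion), occasion)          # confirm it's really a str
    return ElementTree.Element("occasion", text=occasion)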
Trying to automate reopening a bunch of issues on a kanban board in GitLab does not seem to be working as expected. I can edit pretty much everything else: labels, title, description, due date and so on, but changing the state from 'closed' to 'opened' does nothing.
Funnily enough, the save() method takes some time to complete and does not throw any error (though it does when I try to update an issue without having changed anything).
import gitlab
gl = gitlab.Gitlab(URL, private_token=TOKEN, api_version = 4)
gl.auth()
Project = gl.projects.get(project_id, lazy=True)
editable_issue = Project.issues.get(issue_id)
This is how I instantiate the gitlab object and fetch the issue.
This works:
editable_issue.labels.append('some label')
editable_issue.save()
editable_issue.title = title + '\n' + 'edited.'
editable_issue.save()
But this doesn't:
editable_issue.state = 'opened'
editable_issue.save()
I have also tried changing several fields, like so:
editable_issue.state = 'opened'
editable_issue.closed_at = None
editable_issue.save()
But although I receive no error, the issue is not being updated.
Is there something I'm overlooking ?
edit
Oh yes, I forgot to mention that the documentation seems to mention state_event as a method, while my object does not know of any such method.
Is it perhaps only there when a ProjectIssue is being created by the API?
As mentioned in the documentation, you must set the state_event attribute, then save. This attribute won't exist at first -- you must create it.
issue.state_event = 'reopen'
issue.save()
Modifying state or closed_at won't work because the issues edit API doesn't accept state or closed_at parameters.
(this is also why state_event doesn't exist on the object at first, because the issue detail API doesn't include state_event in its response -- all the attributes are set dynamically by the response JSON)
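Putting it together, a minimal sketch (URL, TOKEN, project_id and issue_id are the placeholders from the question's own code):

import gitlab

gl = gitlab.Gitlab(URL, private_token=TOKEN, api_version=4)
gl.auth()

project = gl.projects.get(project_id, lazy=True)
issue = project.issues.get(issue_id)

issue.state_event = 'reopen'   # 'close' works the same way in the other direction
issue.save()                   # sends the edit request with state_event=reopen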
I am working on a machine learning modelling problem where an object is created to store training and validation data, but the validation set is optional, and if it is not included when creating the object the default value is None.
If we find out later on that the user wants to add a validation pandas dataframe, we were hoping to let them supply the name of the dataframe with input(). With a function defined right in the notebook we're running, we can then do an eval(<input>) to turn the string into the object we need. If we define the object outside of our notebook, though, it seems the scope doesn't include that variable.
I realize this probably isn't the best way to do this, so what is a more Pythonic way to let a user supply a dataframe by name after an object has already been instantiated? We can pass the objects fine as arguments to functions. Is there a way to pass an object like that but with input() or some other user-friendly way to prompt the user?
It may be possible to use locals() or globals() as a dict for grabbing an initialized variable by its name.
the_variable = {'key_one': 'val_one'}
selected_input = input("Please input a variable name")
selected_var = locals()[selected_input]
print("selected_var continence -> {0}".format(selected_var))
Should output, well assuming the_variable was passed to input()
selected_var continence -> {'key_one': 'val_one'}
This is an adaptation of an answer to Calling a function of a module by using its name (a string), but it seems to work in this instance too.
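Applied to the validation-dataframe case from the question, that could look something like the following sketch (model, its validation_data attribute, and val_df are made-up names; the lookup has to run in the namespace where the dataframe actually lives, i.e. the notebook):

import pandas as pd

val_df = pd.DataFrame({"x": [1, 2, 3]})   # the dataframe the user built earlier

name = input("Name of the validation dataframe: ")   # e.g. "val_df"
frame = globals().get(name)                          # look it up by name

if isinstance(frame, pd.DataFrame):
    model.validation_data = frame    # attach it to the already-created object
else:
    print("No dataframe named {0!r} in this namespace".format(name))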
Update
I can't remember where I picked up the following perversion (I did look about, though), and I'm not suggesting its use in production. But...
questionable_response = lambda message: input("{message}: ".format(message = message))
this_response = json.loads(questionable_response("Input some JSON please"))
# <- '{"Bill": {"person": true}, "Ted": {"person": "Who is asking?"}}'
... does allow for object-like inputting.
And getting data from an inputted json string could look like...
this_response['Bill']
# -> {u'person': True}
this_response['Ted'].get('person')
# -> u'Who is asking?'
... however, you'll likely see some issues with using above with other scripted components.
For the Unicode conversion there are some previously posted answers on the subject. And checking help(json.loads) shows that there are hooks for parsing floats, ints, and constants (parse_float, parse_int, parse_constant).
Even with that it may not be worth it, because there are still some oddities you'll run into if you try to implement this funkiness.
Just to list a few:
contractions are a no-go; say you get a clever Clara who inputs something like '{"Clara": {"person": "I'll not be labelled!"}}'. That would cause an error unless the ' was escaped, e.g. \'
the above is also quote-fragile; perhaps someone at the keyboard hasn't had enough to drink and tries "{'Jain': {'person': True}}". That would first barf on the single quotes, then heave because True is not valid JSON (it wants lowercase true)
So, as I prefaced at the start of this update, I won't recommend this in production; you could waste a lot of time chasing edge cases. I only share it because maybe you've not found any other option for getting from input() to something that can be interrogated like an object.
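If you do go down that road anyway, wrapping the parse in a try/except at least turns the quote problems above into a prompt instead of a traceback (still just a sketch):

import json

raw = input("Input some JSON please: ")
try:
    parsed = json.loads(raw)
except json.JSONDecodeError as err:     # this is a ValueError subclass (Python 3.5+)
    print("Could not parse that: {0}".format(err))
    parsed = None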
Similar to this question, I want to extract the info of a cron job trigger from APScheduler.
However, I need the "day_of_week" field and not everything. Using
for job in scheduler.get_jobs():
    for f in job.trigger.fields:
        print(f.name + " " + str(f))
I can see all the fields, e.g. week, hour, day_of_week, but
job.trigger.day_of_week is seemingly 'not an attribute' of the CronTrigger object. I'm confused as to what kind of object this job.trigger is and how its fields are packed. I tried to read the code on GitHub, but it is even more puzzling.
How do I extract only the one field day_of_week, and how is this trigger class structured?
Diving deeper, I found
apscheduler.triggers.cron.fields.DayOfWeekField
which I can reach by indexing job.trigger.fields[4]. That seems like really bad style, since it depends on the position of the field. What I get is this DayOfWeekField, from which, comically, I am not able to retrieve its value either:
a.get_value
<bound method DayOfWeekField.get_value of DayOfWeekField('day_of_week', '1,2,3,4')>
The structure of the fields is coded here, but I don't know what to do with dateval, the argument of get_value().
Eventually, after hopefully understanding the concept, I want to do
if job-day_of_week contains mon
if job-day_of_week == '*'
print ( job-day_of_week )
I am grateful for any suggestions/hints!
Looking at the code, you should be able to get the day_of_week field without hardcoding the index by using the CronTrigger class's FIELD_NAMES property, e.g.
dow_index = CronTrigger.FIELD_NAMES.index('day_of_week')
dow = job.trigger.fields[dow_index]
Getting the value of the field is a bit more complicated, but it appears that BaseField implements __str__, which should give you the expression that created the field as a string you can parse to find what you want:
dow_value_as_string = str(dow)
if 'mon' in dow_value_as_string:
    # do something
if dow_value_as_string == "*":
    # do something else
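End to end, that might look like the sketch below (the job and its schedule are made up for illustration):

from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger

def tick():
    pass

scheduler = BackgroundScheduler()
scheduler.add_job(tick, CronTrigger(day_of_week='mon,tue,wed,thu', hour=8))

dow_index = CronTrigger.FIELD_NAMES.index('day_of_week')
for job in scheduler.get_jobs():
    dow = str(job.trigger.fields[dow_index])   # e.g. 'mon,tue,wed,thu' or '*'
    if dow == '*':
        print('runs every day of the week')
    elif 'mon' in dow:
        print('runs on Mondays: ' + dow)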
I'm pretty new to Python, and currently playing with the zeroconf library.
When I try to register a service on the network, I see this in the function definition:
def register_service(self, info, ttl=_DNS_TTL):
    """Registers service information to the network with a default TTL
    of 60 seconds. Zeroconf will then respond to requests for
    information for that service. The name of the service may be
    changed if needed to make it unique on the network."""
    self.check_service(info)
    self.services[info.name.lower()] = info
    if info.type in self.servicetypes:
        self.servicetypes[info.type] += 1
    else:
        self.servicetypes[info.type] = 1
    now = current_time_millis()
    next_time = now
    i = 0
    while i < 3:
        if now < next_time:
            self.wait(next_time - now)
            now = current_time_millis()
            continue
        out = DNSOutgoing(_FLAGS_QR_RESPONSE | _FLAGS_AA)
        out.add_answer_at_time(DNSPointer(info.type, _TYPE_PTR,
                                          _CLASS_IN, ttl, info.name), 0)
        out.add_answer_at_time(DNSService(info.name, _TYPE_SRV,
                                          _CLASS_IN, ttl, info.priority, info.weight, info.port,
                                          info.server), 0)
        out.add_answer_at_time(DNSText(info.name, _TYPE_TXT, _CLASS_IN,
                                       ttl, info.text), 0)
        if info.address:
            out.add_answer_at_time(DNSAddress(info.server, _TYPE_A,
                                              _CLASS_IN, ttl, info.address), 0)
        self.send(out)
        i += 1
        next_time += _REGISTER_TIME
Anyone know what type info is meant to be?
EDIT
Thanks for providing the answer that it's a ServiceInfo class. Beyond the fact that the docstring provides this answer when one goes searching for it, I'm still unclear on:
the process expert Python programmers follow when encountering this sort of situation: what steps do you take to find the data type of info, say, when a docstring isn't available?
how the Python interpreter knows info is of the ServiceInfo class when we don't specify the class type as part of the input parameter for register_service. How does it know info.type is a valid property, and that, say, info.my_property isn't?
It is an instance of ServiceInfo class.
It can be deduced from reading the code and docstrings: register_service invokes the check_service function which, I quote, "checks the network for a unique service name, modifying the ServiceInfo passed in if it is not unique".
It looks like it should be a ServiceInfo. Found in the examples of the repository:
https://github.com/jstasiak/python-zeroconf/blob/master/examples/registration.py
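For reference, a registration sketch in the spirit of that example (the service name, address and port are placeholders, and the exact ServiceInfo signature has shifted between zeroconf versions, so treat this as an approximation):

import socket
from zeroconf import ServiceInfo, Zeroconf

desc = {'path': '/~example/'}
info = ServiceInfo("_http._tcp.local.",
                   "My Test Service._http._tcp.local.",
                   socket.inet_aton("127.0.0.1"), 80, 0, 0,
                   desc, "myhost.local.")

zc = Zeroconf()
zc.register_service(info)      # info is the ServiceInfo the question asks about
# ... later
zc.unregister_service(info)
zc.close()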
Edit
I'm not really sure what to say besides "any way I have to". In practice I can't really remember a time when the contract of the interface wasn't made perfectly clear, because that's just part of using Python. Documentation is more of a requirement for this reason.
The short answer is, "it doesn't". Python uses the concept of "duck typing" in which any object that supports the necessary operations of the contract is valid. You could have given it any value that has all the properties the code uses and it wouldn't know the difference. So, per part 1, worst case you just have to trace every use of the object back as far as it is passed around and provide an object that meets all the requirements, and if you miss a piece, you'll get a runtime error for any code path that uses it.
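A toy illustration of that duck typing (nothing to do with zeroconf itself, just made-up names):

class FakeInfo:
    # only needs the attributes the consuming code actually touches
    name = "fake._http._tcp.local."
    type = "_http._tcp.local."

def describe(info):
    return info.type              # works for any object with a .type attribute

print(describe(FakeInfo()))       # no class check anywhere, just attribute access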
My preference is for static typing as well. Largely I think documentation and unit tests just become "harder requirements" when working with dynamic typing since the compiler can't do any of that work for you.