How do you test your ncurses app in Python?

We've built a CLI app with Python. Some parts need ncurses, so we use
npyscreen. We've successfully tested most parts of the app using pytest
(with the help of mock and other tools), but we're stuck on how to test
the ncurses part of the code.
Take this part of our ncurses code that prompts the user for answers:
"""
Generate text user interface:
example :
fields = [
{"type": "TitleText", "name": "Name", "key": "name"},
{"type": "TitlePassword", "name": "Password", "key": "password"},
{"type": "TitleSelectOne", "name": "Role",
"key": "role", "values": ["admin", "user"]},
]
form = form_generator("Form Foo", fields)
print(form["role"].value[0])
print(form["name"].value)
"""
def form_generator(form_title, fields):
def myFunction(*args):
form = npyscreen.Form(name=form_title)
result = {}
for field in fields:
t = field["type"]
k = field["key"]
del field["type"]
del field["key"]
result[k] = form.add(getattr(npyscreen, t), **field)
form.edit()
return result
return npyscreen.wrapper_basic(myFunction)
We have tried many approaches, but all failed:
- StringIO to capture the output: failed
- redirecting the output to a file: failed
- hecate: failed (I think it only works if we run the whole program)
- pyautogui: same - I think it only works if we run the whole program
These are the complete steps of what I have tried.
So the last thing I did was use patch: I patch those functions. The
downside is that the statements inside those functions remain untested,
because the tests just assert hard-coded return values.
I found the npyscreen docs for writing tests, but I don't completely understand them; there is just one example.
Thank you in advance.

I don't see it mentioned in the python docs, but you can use the screen-dump feature of the curses library to capture information for analysis.
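Another angle, since the patch approach leaves form_generator's body untested: stub out npyscreen itself with a fake module, so the field loop and form.add() wiring run without a terminal. This is only a sketch under assumptions - FakeForm, FakeTitleText, and the fake wrapper_basic below are made-up stand-ins, not npyscreen's real behavior:

```python
import sys
import types

# A fake npyscreen module (hypothetical stand-in, not the real library)
# so form_generator's internals can run headless in a test.
fake_npyscreen = types.ModuleType("npyscreen")

class FakeForm:
    def __init__(self, name=None):
        self.name = name

    def add(self, widget_class, **kwargs):
        # Instantiate the widget instead of drawing it on screen
        return widget_class(**kwargs)

    def edit(self):
        # No-op instead of blocking on user interaction
        pass

class FakeTitleText:
    def __init__(self, name=None, **kwargs):
        self.name = name
        self.value = None

fake_npyscreen.Form = FakeForm
fake_npyscreen.TitleText = FakeTitleText
fake_npyscreen.wrapper_basic = lambda func: func()  # run directly, no curses

sys.modules["npyscreen"] = fake_npyscreen
import npyscreen

# form_generator copied from the question
def form_generator(form_title, fields):
    def myFunction(*args):
        form = npyscreen.Form(name=form_title)
        result = {}
        for field in fields:
            t = field["type"]
            k = field["key"]
            del field["type"]
            del field["key"]
            result[k] = form.add(getattr(npyscreen, t), **field)
        form.edit()
        return result
    return npyscreen.wrapper_basic(myFunction)

form = form_generator("Form Foo",
                      [{"type": "TitleText", "name": "Name", "key": "name"}])
assert "name" in form and form["name"].name == "Name"
```

This at least exercises the type/key bookkeeping and the add() calls; it says nothing about how the form renders, which is the part the screen-dump feature could cover.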

Related

Alexa: How to assign a slot response to a variable. (Python)

I'm not sure how to assign a slot to a variable in an Alexa skill. I've found several tutorials, but most of them are in JS (I wrote this code in Python) or outdated, since even using them precisely as presented does not work.
Alexa is meant to ask for one of my kids' names so I can implement a personalized answer, but I can't find a way to make use of the name once she gets it.
Here's how I call it in my code (look at variable "kids_name" in particular):
@sb.request_handler(can_handle_func=lambda input:
                    currently_playing(input) and
                    is_intent_name("ChoresIntent")(input))
def chores_intent_handler(handler_input):
    session_attr = handler_input.attributes_manager.session_attributes
    original_date = date(2022, 4, 8)
    today = date.today()
    diff = (today - original_date).days
    kids_name = handler_input.request_envelope.request.intent.slots.name.value
    mod = "Error. No input."
That seemed to be how they set the variable in the most recent tutorial I found, but it absolutely will not run for me. I've watched tutorials on pulling data from JSON files, but none of their answers look anything like this.
As I understand it, I construct the path from the JSON, but I don't understand the syntax. Here's the JSON for my skill. I would really appreciate some clarification on how to transfer handler answers from one to the other. While I think I get the basic structure of the dictionaries, the methods I see for accessing them are very confusing to me.
{
    "name": "ChoresIntent",
    "slots": [
        {
            "name": "childname",
            "type": "childname"
        }
    ],
    "samples": [
        "what are {childname} s jobs today",
        "what are my jobs today",
        "can you tell me {childname} s chores"
    ]
}
],
"types": [
    {
        "name": "childname",
        "values": [
            {
                "name": {
                    "value": "Matthew",
                    "synonyms": [
                        "Mattie"
                    ]
                }
            },
Thank you in advance! I really appreciate the help I get on here.
You have two options.
Without ask_sdk:
kids_name = handler_input.request_envelope.request.intent.slots["name"].value
With ask_sdk:
kids_name = ask_utils.request_util.get_slot_value(handler_input, "name")
# You can also get the slot and then read its value:
slotName = ask_utils.request_util.get_slot(handler_input, "name")
kids_name = slotName.value
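Two details worth spelling out, with a toy stand-in for the envelope (plain SimpleNamespace objects below, not the real ask_sdk classes): slots is a dict, so it must be indexed with brackets rather than attribute access, and the key is the slot name from your interaction model - which in the JSON above is childname, not name:

```python
from types import SimpleNamespace

# Made-up objects mimicking the shape of handler_input.request_envelope;
# the real ask_sdk model classes behave the same way for this access path.
slot = SimpleNamespace(name="childname", value="Matthew")
intent = SimpleNamespace(name="ChoresIntent", slots={"childname": slot})
handler_input = SimpleNamespace(
    request_envelope=SimpleNamespace(request=SimpleNamespace(intent=intent))
)

# slots is a dict keyed by slot name, so index it with the name
# defined in the interaction model:
kids_name = handler_input.request_envelope.request.intent.slots["childname"].value
print(kids_name)  # Matthew
```

Attribute access like .slots.name fails precisely because slots is a dict, which may be why the tutorial code would not run.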

Find all unique values for field in Elasticsearch through python

I've been scouring the web for some good python documentation for Elasticsearch. I've got a query term that I know returns the information I need, but I'm struggling to convert the raw string into something Python can interpret.
This will return a list of all unique 'VALUE's in the dataset.
{"find": "terms", "field": "hierarchy1.hierarchy2.VALUE"}
Which I have taken from a dashboarding tool which accesses this data.
But I don't seem to be able to convert this into correct python.
I've tried this:
body_test = {"find": "terms", "field": "hierarchy1.hierarchy2.VALUE"}
es = Elasticsearch(SETUP CONNECTION)
es.search(
    index="INDEX_NAME",
    body=body_test
)
but it doesn't like the find value. I can't find anything in the documentation about find.
RequestError: RequestError(400, 'parsing_exception', 'Unknown key for
a VALUE_STRING in [find].')
The only way I've got it to slightly work is with
es_search = (
    Search(
        using=es,
        index=db_index
    ).source(['hierarchy1.hierarchy2.VALUE'])
)
But I think this is pulling the entire dataset and then filtering (which I obviously don't want to be doing each time I run this code). This needs to be done through python and so I cannot simply POST the query I know works.
I am completely new to ES and so this is all a little confusing. Thanks in advance!
So it turns out that the find in this case was specific to Grafana (the dashboarding tool I took the query from).
In the end I used this site and used the code from there. It's a LOT more complicated than I thought it was going to be. But it works very quickly and doesn't put a strain on the database (which my alternative method was doing).
In case the link dies in future years, here's the code I used:
from elasticsearch import Elasticsearch

es = Elasticsearch()

def iterate_distinct_field(es, fieldname, pagesize=250, **kwargs):
    """
    Helper to get all distinct values from ElasticSearch
    (ordered by number of occurrences)
    """
    compositeQuery = {
        "size": pagesize,
        "sources": [{
            fieldname: {
                "terms": {
                    "field": fieldname
                }
            }
        }]
    }
    # Iterate over pages
    while True:
        result = es.search(**kwargs, body={
            "aggs": {
                "values": {
                    "composite": compositeQuery
                }
            }
        })
        # Yield each bucket
        for aggregation in result["aggregations"]["values"]["buckets"]:
            yield aggregation
        # Set "after" field
        if "after_key" in result["aggregations"]["values"]:
            compositeQuery["after"] = \
                result["aggregations"]["values"]["after_key"]
        else:  # Finished!
            break

# Usage example
for result in iterate_distinct_field(es, fieldname="pattern.keyword", index="strings"):
    print(result)  # e.g. {'key': {'pattern': 'mypattern'}, 'doc_count': 315}
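As a sanity check of the pagination logic, the generator can be exercised against a fake client that serves two pages (FakeES is a made-up stub for illustration, not part of the elasticsearch package):

```python
# FakeES is a hypothetical stand-in returning canned composite-aggregation
# pages: page one carries an after_key, page two does not, ending the loop.
class FakeES:
    def __init__(self):
        self.pages = [
            {"buckets": [{"key": {"pattern": "a"}, "doc_count": 3}],
             "after_key": {"pattern": "a"}},
            {"buckets": [{"key": {"pattern": "b"}, "doc_count": 1}]},
        ]
        self.calls = 0

    def search(self, body=None, **kwargs):
        page = self.pages[self.calls]
        self.calls += 1
        return {"aggregations": {"values": page}}

# iterate_distinct_field restated from the answer above
def iterate_distinct_field(es, fieldname, pagesize=250, **kwargs):
    compositeQuery = {
        "size": pagesize,
        "sources": [{fieldname: {"terms": {"field": fieldname}}}],
    }
    while True:
        result = es.search(**kwargs, body={
            "aggs": {"values": {"composite": compositeQuery}}
        })
        for aggregation in result["aggregations"]["values"]["buckets"]:
            yield aggregation
        if "after_key" in result["aggregations"]["values"]:
            compositeQuery["after"] = result["aggregations"]["values"]["after_key"]
        else:
            break

buckets = list(iterate_distinct_field(FakeES(), "pattern.keyword"))
print([b["key"]["pattern"] for b in buckets])  # ['a', 'b']
```

The absence of after_key on the final page is what terminates the loop, so each distinct value is fetched exactly once.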

How can you monkeypatch an input call from a nested function?

I am a novice python coder on my greatest of days. I am in a class and using pytest to wrap my head around TDD. Some of the functions in this code (based off of Dane Hillard's Bark) call a function that prompts the user for input. I need to automate the input of the nested function.
def get_user_input(label, required=True):
    value = input(f"{label}: ") or None
    while required and not value:
        value = input(f"{label}: ") or None
    return value

def get_new_bookmark_data():
    return {
        "title": get_user_input("Title"),
        "url": get_user_input("URL"),
        "notes": get_user_input("Notes", required=False),
    }
I can't even wrap my head around how I should deal with test_get_user_input() much less the bookmark.
Here are some of the things I have tried:
def test_get_user_input(monkeypatch):
    uInput = 'a'
    monkeypatch.setattr('sys.stdin', uInput)
    assert barky.get_user_input() == 'A'
I'm just not there yet. Here is my repo: https://github.com/impilcature/Green-CIDM6330
Mocking is done the same way in pytest as in unittest, by using unittest.mock.patch - in your case by mocking the input function that lives in builtins. So you can just do:
from unittest.mock import patch

@patch("builtins.input")
def test_get_user_input(mocked):
    mocked.return_value = "a"
    assert get_user_input('foo') == 'a'
That mocks input regardless of where it is called, so it doesn't matter if you call it in the tested function or in a function called by the tested function.
To be more consistent with the pytest way, you can install the pytest plugin pytest-mock which provides the mocker fixture, which is a wrapper around unittest.mock:
def test_get_user_input(mocker):
    mocker.patch('builtins.input', return_value='a')
    assert get_user_input('foo') == 'a'
If you want to test more than one input, you can use side_effect to provide subsequent inputs (here using the mocker fixture):
def test_get_new_bookmark_data(mocker):
    mocker.patch('builtins.input',
                 side_effect=["Some title", "Some URL", "Some notes"])
    assert get_new_bookmark_data() == {
        "title": "Some title",
        "url": "Some URL",
        "notes": "Some notes",
    }
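For completeness, unittest.mock.patch also works as a context manager, which makes it easy to check the required-retry loop in get_user_input (copied from the question) - queuing an empty first answer forces a second prompt:

```python
from unittest.mock import patch

# get_user_input copied from the question
def get_user_input(label, required=True):
    value = input(f"{label}: ") or None
    while required and not value:
        value = input(f"{label}: ") or None
    return value

# side_effect queues the answers; the empty string becomes None,
# so the while loop re-prompts and consumes the second answer.
with patch("builtins.input", side_effect=["", "a"]) as mocked:
    result = get_user_input("Title")

print(result)             # a
print(mocked.call_count)  # 2 - the empty answer triggered one retry
```

The call_count assertion is what proves the retry branch actually ran, which a single return_value cannot show.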

pymongo db.command exclude fields / projection

I am trying to write a call to db.command() using PyMongo that performs a geoNear search and I would like to exclude fields. The documentation for db.runCommand on the Mongo site and the PyMongo documentation both do not explain how one can accomplish this.
I understand how to do this using db.collection.find():
response = collection.find_one(
    filter={"PostalCode": postal_code},
    projection={'_id': False}
)
However, I cannot find any example anywhere of how to accomplish this when performing a geoNear search utilizing db.command():
params = {
    "near": {
        "type": "Point",
        "coordinates": [longitude, latitude]
    },
    "spherical": True,
    "limit": 1,
}
response = self.db.command("geoNear", value=self._collection_name, **params)
Can anyone provide insight into how one excludes fields when using db.command?
The geoNear command does not have a "projection" feature. It always returns entire documents. See the geoNear command reference for its options:
https://docs.mongodb.com/manual/reference/command/geoNear/
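One workaround, if you can switch from the command to the aggregation framework: the $geoNear aggregation stage can be followed by a $project stage, which does drop fields. A sketch only - the coordinates are placeholders and the final aggregate call is commented out since it needs a live collection:

```python
longitude, latitude = -73.97, 40.77  # placeholder coordinates

# $geoNear must be the first stage of the pipeline, and distanceField
# (where the computed distance is written) is required there.
pipeline = [
    {
        "$geoNear": {
            "near": {"type": "Point", "coordinates": [longitude, latitude]},
            "distanceField": "dist",
            "spherical": True,
        }
    },
    {"$limit": 1},
    {"$project": {"_id": False}},
]

# With a real pymongo collection this would be:
# response = list(collection.aggregate(pipeline))
```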

Updating a custom field using ASANA Python API

I'm trying to update the values of custom fields in my Asana list. I'm using the Official Python client library for the Asana API v1.
My code currently looks like this:
project = "Example Project"
keyword = "Example Task"
print "Logging into ASANA"
api_key = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
client = asana.Client.basic_auth(api_key)
me = client.users.me()
all_projects = next(workspace for workspace in me['workspaces'])
projects = client.projects.find_by_workspace(all_projects['id'])
for project in projects:
if 'Example Project' not in project['name']:
continue
print "Project found."
print "\t"+project['name']
print
tasks = client.tasks.find_by_project(project['id'], {"opt_fields":"this.name,custom_fields"}, iterator_type=None)
for task in tasks:
if keyword in task['name']:
print "Task found:"
print "\t"+str(task)
print
for custom_field in task['custom_fields']:
custom_field['text_value'] = "New Data!"
print client.tasks.update(task['id'], {'data':task})
But when I run the code, the task doesn't update. The return of print client.tasks.update returns all the details of the task, but the custom field has not been updated.
I think the problem is that our API is not symmetrical with respect to custom fields, which I kind of find to be a bummer; it can be a real gotcha in cases like this. Rather than setting the value of a custom field within the block of values as you're doing above, which would be intuitive, you have to set them with a dictionary-like mapping of custom_field_id: new_value - not as intuitive, unfortunately. So above, where you have
for custom_field in task['custom_fields']:
    custom_field['text_value'] = "New Data!"
I think you'd have to do something like this:
new_custom_fields = {}
for custom_field in task['custom_fields']:
    new_custom_fields[custom_field['id']] = "New Data!"
task['custom_fields'] = new_custom_fields
The goal is to generate JSON for the POST request that looks something like
{
    "data": {
        "custom_fields": {
            "12345678": "New Data!"
        }
    }
}
As a further note, the value should be the new text string if you have a text custom field, a number if it's a number custom field, and the ID of the enum_options choice (take a look at the third example under this header on our documentation site) if it's an enum custom field.
Thanks to Matt, I got to the solution.
new_custom_fields = {}
for custom_field in task['custom_fields']:
    new_custom_fields[custom_field['id']] = "New Data!"
print client.tasks.update(task['id'], {'custom_fields': new_custom_fields})
There were two problems in my original code, the first was that I was trying to treat the API symmetrically and this was identified and solved by Matt. The second was that I was trying to update in an incorrect format. Note the difference between client.tasks.update in my original and updated code.
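To make the payload shape concrete, here is the transformation run on made-up data (the task ID, field ID 12345678, and values are all illustrative, and json.dumps stands in for what the client sends):

```python
import json

# Illustrative task as returned with opt_fields=custom_fields;
# the ID and old value are invented for this sketch.
task = {
    "id": 111,
    "custom_fields": [
        {"id": 12345678, "text_value": "old"},
    ],
}

# Build the custom_field_id: new_value mapping from the answer
new_custom_fields = {}
for custom_field in task["custom_fields"]:
    new_custom_fields[custom_field["id"]] = "New Data!"

# JSON serialization turns the integer keys into strings,
# matching the request body shown in the answer.
body = json.dumps({"data": {"custom_fields": new_custom_fields}})
print(body)  # {"data": {"custom_fields": {"12345678": "New Data!"}}}
```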
