I am trying to build a Google Vision AI product search system in Python.
I have already uploaded a product set.
However, when I try to query the product set from the command line using the Python sample below, I get an argparse error.
https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/vision/cloud-client/product_search/product_set_management.py
When I typed:
python productsetmanagement.py --project_id="test-as-vision-api-v1" --location="asia-east1" list_product_sets
I could find my product set: set01
However, when I typed:
python productsetmanagement.py --project_id="test-as-vision-api-v1" --location="asia-east1" --product_set_id="set01" get_product_set
I got an error: the following arguments are required: product_set_id
I have already supplied the product set ID, so why do I still get the error? Am I using argparse incorrectly?
Many thanks.
The product_set_id is a positional argument, so it is given without a --flag. You would call it like this:
python productsetmanagement.py --project_id="test-as-vision-api-v1" --location="asia-east1" get_product_set "set01"
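For reference, here is a minimal sketch of how the sample script likely wires up its subcommands (argument names are taken from the commands above; see the linked sample for the real code):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--project_id')
parser.add_argument('--location')
subparsers = parser.add_subparsers(dest='command')

# each subcommand declares its own arguments
get_parser = subparsers.add_parser('get_product_set')
get_parser.add_argument('product_set_id')  # no leading dashes, so it is positional

args = parser.parse_args()
# argparse therefore expects:  ... get_product_set set01
# and rejects:                 ... --product_set_id="set01" get_product_set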
I've been using the uszipcode Python package to query demographic data by zipcode, and I'm having trouble limiting my search results to only Standard-type zipcodes.
from uszipcode import SearchEngine, ZipcodeTypeEnum

search = SearchEngine(simple_or_comprehensive=SearchEngine.SimpleOrComprehensiveArgEnum.comprehensive)
ziplist = {}
for zippy in range(91320, 91329):
    try:
        ziplist[zippy] = search.by_zipcode(zipcode=zippy, zipcode_type="Standard").to_dict()
    except:
        pass
The code works fine if I use only the zipcode parameter, but it returns data for all zipcode types (which I don't want). However, if I add the zipcode_type parameter, no data is returned. I've tried setting the parameter to "Standard", "STANDARD", and ZipcodeTypeEnum.Standard as the documentation suggests, but nothing seems to work. I've been stuck on this for a while and I'm hoping someone can enlighten me on what I am doing wrong here. Thanks in advance.
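Spelled out, the three variants of the call I tried look like this (each returns no data, even for a zipcode I know is Standard):

search.by_zipcode(zipcode=91320, zipcode_type="Standard")
search.by_zipcode(zipcode=91320, zipcode_type="STANDARD")
search.by_zipcode(zipcode=91320, zipcode_type=ZipcodeTypeEnum.Standard)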
https://uszipcode.readthedocs.io/01-Usage-Example/index.html
https://uszipcode.readthedocs.io/uszipcode/search.html#by_zipcode
I want to take a file of one or more bibtex entries and output it as an html-formatted string. The specific style is not so important, but let's just say APA. Basically, I want the functionality of bibtex2html but with a Python API since I'm working in Django. A few people have asked similar questions here and here. I also found someone who provided a possible solution here.
The first issue I'm having is pretty basic, which is that I can't even get the above solutions to run. I keep getting errors similar to ModuleNotFoundError: No module named 'pybtex.database'; 'pybtex' is not a package. I definitely have pybtex installed and can make basic API calls in the shell no problem, but whenever I try to import pybtex.database.whatever or pybtex.plugin I keep getting ModuleNotFound errors. Is it maybe a python 2 vs python 3 thing? I'm using the latter.
The second issue is that I'm having trouble understanding the pybtex Python API documentation. From what I can tell, the format_from_string and format_from_file calls are designed for exactly what I want to do, but I can't seem to get the syntax right. When I do
pybtex.format_from_file('foo.bib',style='html')
I get pybtex.plugin.PluginNotFound: plugin pybtex.style.formatting.html not found. I think I'm just not understanding how the call is supposed to work, and I can't find any examples of how to do it properly.
Here's a function I wrote for a similar use case: incorporating bibliographies into a website generated by Pelican.
from pybtex.plugin import find_plugin
from pybtex.database import parse_string

# load the APA formatting style and the HTML output backend
APA = find_plugin('pybtex.style.formatting', 'apa')()
HTML = find_plugin('pybtex.backends', 'html')()

def bib2html(bibliography, exclude_fields=None):
    exclude_fields = exclude_fields or []
    if exclude_fields:
        # round-trip through a bibtex string so unwanted fields can be dropped
        bibliography = parse_string(bibliography.to_string('bibtex'), 'bibtex')
        for entry in bibliography.entries.values():
            for ef in exclude_fields:
                if ef in entry.fields.__dict__['_dict']:
                    del entry.fields.__dict__['_dict'][ef]
    formattedBib = APA.format_bibliography(bibliography)
    return "<br>".join(entry.text.render(HTML) for entry in formattedBib)
Make sure you've installed the following:
pybtex==0.22.2
pybtex-apa-style==1.3
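A minimal usage sketch (the .bib filename is from the question; the excluded field is just an example):

from pybtex.database import parse_file

bib = parse_file('foo.bib')
print(bib2html(bib, exclude_fields=['abstract']))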
I have been using the elasticsearch-dsl Python package to query my Elasticsearch database. The querying method is very intuitive, but I'm having issues retrieving the documents. This is what I have tried:
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search

es = Elasticsearch(hosts=[{"host": 'xyz', "port": 9200}], timeout=400)
s = Search(using=es, index="xyz-*").query("match_all")
response = s.execute()
for hit in response:
    print(hit.title)
The error I get :
AttributeError: 'Hit' object has no attribute 'title'
I googled the error and found another SO question: How to access the response object using elasticsearch DSL for python
The solution mentions:
for hit in response:
    print(hit.doc.firstColumnName)
Unfortunately, I ran into the same issue again with 'doc'. What is the correct way to access my documents?
Any help would really be appreciated!
I've run into the same issue, and the behavior seems to depend on the version of the elasticsearch-dsl library you're using. It's worth exploring the response object and its sub-objects. For instance, using version 5.3.0, I see the expected data with either of the loops below.
for hit in RESPONSE.hits._l_:
    print(hit)
or
for hit in RESPONSE.hits.hits:
    print(hit)
NOTE these are limited to 10 data elements, which appears to be Elasticsearch's default page size.
print(len(RESPONSE.hits.hits))
10
print(len(RESPONSE.hits._l_))
10
This doesn't match the overall hit count reported by print('Total {} hits found.\n'.format(RESPONSE.hits.total))
Good luck!
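As a side note, here is a sketch of two ways to get past that 10-hit page with elasticsearch-dsl (reusing the es client and index from the question; a sketch, not part of the original answer):

from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search

es = Elasticsearch(hosts=[{"host": 'xyz', "port": 9200}])
s = Search(using=es, index="xyz-*").query("match_all")

# request a larger page explicitly (translates to from/size in the query):
for hit in s[0:100].execute():
    print(hit)

# or iterate over every match via the scroll API:
for hit in s.scan():
    print(hit)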
From version 6 onwards the response no longer returns your populated Document class; your fields come back as an AttrDict, which is basically a dictionary.
To solve this, you need a Document class representing the document you want to parse. Then you parse the hit dictionary with your Document class using the .from_es() method.
Like I answered here.
https://stackoverflow.com/a/64169419/5144029
Also have a look at the Document class here
https://elasticsearch-dsl.readthedocs.io/en/7.3.0/persistence.html
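A minimal sketch of that approach (the class and field names are assumptions based on the question's code):

from elasticsearch import Elasticsearch
from elasticsearch_dsl import Document, Search, Text

class MyDoc(Document):
    title = Text()  # assumed field; match it to your actual mapping

es = Elasticsearch(hosts=[{"host": 'xyz', "port": 9200}])
response = Search(using=es, index="xyz-*").query("match_all").execute()
for raw_hit in response.hits.hits:           # the raw hit dicts
    doc = MyDoc.from_es(raw_hit.to_dict())   # parse into the Document class
    print(doc.title)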
I'm trying to run the example named "Using PycURL" from https://stem.torproject.org/tutorials/to_russia_with_love.html
Everything works fine, but at the end I get this error:
TypeError : String argument expected, got 'bytes'
Unable to reach http://google.com <<23, 'Failed writing body <0 != 144>'>>
The question is: how can I fix this?
I've tried using PycURL as is, without any proxy, and it works fine.
But this example does not work.
I'm running Python 3.4 under Windows; here is my source code: http://pastebin.com/zFWrXU5E
Thanks.
P.S. I need this to work with PyCurl specifically, because it is the most useful for my tasks.
P.S. #2: I made a little workaround that seems to work: http://pastebin.com/x8PtL9i3
P.S. #3: Hey! I found where the error comes from: it's in PyCurl's WRITEFUNCTION; somehow the io.StringIO().write function does not work ...
Solved.
The problem was with Python 3.4: the StringIO object changed between Python 2 and 3.
All you need to do is change the output variable's type from StringIO to BytesIO, and then decode the bytes to a string when printing the result.
Here is the working source code: http://pastebin.com/Ad8ENTGe
Thanks.
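For readers who can't open the pastebin, here is a minimal sketch of the fix, modeled on the tutorial's query() function (the SOCKS port value is an assumption; use whatever port your Tor client listens on):

import io
import pycurl

SOCKS_PORT = 7000  # assumption: the port of your Tor SOCKS listener

def query(url):
    # collect the response as bytes; io.StringIO() rejects the bytes pycurl writes
    output = io.BytesIO()

    conn = pycurl.Curl()
    conn.setopt(pycurl.URL, url)
    conn.setopt(pycurl.PROXY, 'localhost')
    conn.setopt(pycurl.PROXYPORT, SOCKS_PORT)
    conn.setopt(pycurl.PROXYTYPE, pycurl.PROXYTYPE_SOCKS5_HOSTNAME)
    conn.setopt(pycurl.WRITEFUNCTION, output.write)
    conn.perform()

    # decode the collected bytes back to a string for printing
    return output.getvalue().decode('utf-8')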
I am using the xgoogle Python library to try to search a specific site. The code works for me when I do not use the "site:" operator in the keyword search. If I do use it, the result set is empty. Does anyone have any thoughts on how to get the code below to work?
from xgoogle.search import GoogleSearch, SearchError

gs = GoogleSearch("site:reddit.com fun")
gs.results_per_page = 50
results = gs.get_results()
print results
for res in results:
    print res.title.encode("utf8")
    print
A simple url with the "q" parameter (e.g. "http://www.google.com/search?&q=site:reddit.com+fun") works, so I assume it's some other problem.
If you are using pkrumins/xgoogle, a quick (and dirty) fix is to modify search.py line 240 as follows:
if not title or not url:
This is because Google changed their SERP layout, which breaks the _extract_description() function.
You can also take a look at this fork.
Put the keyword before site:XX. It works for me.
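For example, reversing the order of the terms in the question's query (a sketch, untested):

gs = GoogleSearch("fun site:reddit.com")  # keyword first, then the site: operator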