I am running the following Python code block using pandas parse_dates but get a syntax error. Since I am still struggling to find the proper package for my Atom editor to help me with syntax error detection, I would appreciate any help I can get on this.
marketing = pd.read_csv.('/Users/name/Folder/marketing.csv', parse_dates=['date_served', 'date_subscribed', 'date_caneled'])
Take out the dot before the first paren:
marketing = pd.read_csv('/Users/name/Folder/marketing.csv', parse_dates=['date_served', 'date_subscribed', 'date_caneled'])
This question has been asked to death, but none of the answers provide an actual workable solution. I had previously found one in get-all-tickers:
pip install get-all-tickers
Recently, for whatever reason, the package get-all-tickers has stopped working:
from get_all_tickers import get_tickers as gt
list_of_tickers = gt.get_tickers()
gives me the error:
ParserError: Error tokenizing data. C error: Expected 1 fields in line 23, saw 46
As this is the only package I found that actually gave a complete ticker list (a good check is "NKLA", which is missing from 100% of the other "solutions" I've found on Stack Overflow or elsewhere), I now need either a new way to get up-to-date ticker lists or a fix for this...
Any ideas?
Another solution would be to load this data as CSV.
Get the CSV from:
https://plextock.com/us-symbols?utm_source=so
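Once downloaded, the file can be read straight into pandas. A minimal sketch, assuming the export is saved locally as us-symbols.csv (a hypothetical filename; the actual column names depend on the file you get):
import pandas as pd

# Hypothetical filename; adjust to wherever you saved the downloaded CSV
symbols = pd.read_csv('us-symbols.csv')

# Inspect whatever columns the export actually provides
print(symbols.head())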
See this answer first: https://quant.stackexchange.com/a/1862/38968
NASDAQ makes this information available via FTP and they update it every night. Log into ftp.nasdaqtrader.com anonymously. Look in the directory SymbolDirectory. You'll notice two files: nasdaqlisted.txt and otherlisted.txt. These two files will give you the entire list of tradeable symbols, where they are listed, their name/description, and an indicator as to whether they are an ETF.
Given this list, which you can pull each night, you can then query Yahoo to obtain the necessary data to calculate your statistics.
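For example, here is a minimal sketch of pulling and parsing nasdaqlisted.txt with Python's standard ftplib, assuming the layout described above (pipe-delimited, with a "File Creation Time" footer row):
from ftplib import FTP
import io
import pandas as pd

# Anonymous login to NASDAQ's public FTP server
ftp = FTP('ftp.nasdaqtrader.com')
ftp.login()
ftp.cwd('SymbolDirectory')

# Download nasdaqlisted.txt into memory
buf = io.BytesIO()
ftp.retrbinary('RETR nasdaqlisted.txt', buf.write)
ftp.quit()

# The file is pipe-delimited; drop the trailing "File Creation Time" footer row
df = pd.read_csv(io.StringIO(buf.getvalue().decode('utf-8')), sep='|')
df = df[~df['Symbol'].astype(str).str.startswith('File Creation Time')]
print(df['Symbol'].tolist()[:10])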
Also, the New York Stock Exchange provides a search function:
https://www.nyse.com/listings_directory/stock
...and this page seems to have a lot as well; it has Nikola/NKLA at least ;)
https://www.advfn.com/nasdaq/nasdaq.asp?companies=N
This pip package was recently broken; someone has already raised an issue on the project's GitHub (https://github.com/shilewenuw/get_all_tickers/issues/12).
It was caused by a recent update to the NASDAQ API.
Not perfect, but Kaggle has some:
https://www.kaggle.com/datasets/jacksoncrow/stock-market-dataset?resource=download
You could use the free Alpha Vantage API: https://www.alphavantage.co/documentation/
Example:
import requests
key = '2DHC1EFVR3EOQ33Z' # free key from https://www.alphavantage.co/support/#api-key -- no registration required
result = requests.get('https://www.alphavantage.co/query?function=GLOBAL_QUOTE&symbol=NKLA&apikey=' + key).json()
print(f'The price for NKLA right now is ${result["Global Quote"]["05. price"]}.')
I am trying to build a Google Vision AI product search system using Python.
I have already uploaded a product set.
However, when I try to manage the product set using the Python script below (which uses argparse), I get an error.
https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/vision/cloud-client/product_search/product_set_management.py
When I typed:
python productsetmanagement.py --project_id="test-as-vision-api-v1" --location="asia-east1" list_product_sets
I could find my product set: set01
However, when I typed:
python productsetmanagement.py --project_id="test-as-vision-api-v1" --location="asia-east1" --product_set_id="set01" get_product_set
I got an error: the following arguments are required: product_set_id
I have already supplied the product set ID, so why do I still get the error? Did I use argparse incorrectly?
Many thanks.
The product_set_id is a positional argument. You would call it like this:
python productsetmanagement.py --project_id="test-as-vision-api-v1" --location="asia-east1" get_product_set "set01"
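Roughly speaking, the sample script wires its subcommands like the sketch below (simplified, not the actual code from product_set_management.py), which is why product_set_id must be passed positionally after the subcommand rather than as a --flag:
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--project_id')
parser.add_argument('--location')
subparsers = parser.add_subparsers(dest='command')

# list_product_sets takes no extra arguments
subparsers.add_parser('list_product_sets')

# get_product_set declares product_set_id as a positional argument, so
# argparse only accepts it after the subcommand name, without a -- prefix
get_parser = subparsers.add_parser('get_product_set')
get_parser.add_argument('product_set_id')

args = parser.parse_args()
print(args)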
I want to take a file of one or more bibtex entries and output it as an html-formatted string. The specific style is not so important, but let's just say APA. Basically, I want the functionality of bibtex2html but with a Python API since I'm working in Django. A few people have asked similar questions here and here. I also found someone who provided a possible solution here.
The first issue I'm having is pretty basic: I can't even get the above solutions to run. I keep getting errors similar to ModuleNotFoundError: No module named 'pybtex.database'; 'pybtex' is not a package. I definitely have pybtex installed and can make basic API calls in the shell no problem, but whenever I try to import pybtex.database.whatever or pybtex.plugin, I keep getting ModuleNotFound errors. Is it maybe a Python 2 vs. Python 3 thing? I'm using the latter.
The second issue is that I'm having trouble understanding the pybtex Python API documentation. From what I can tell, the format_from_string and format_from_file calls are designed for exactly what I want to do, but I can't seem to get the syntax right. Specifically, when I do
pybtex.format_from_file('foo.bib', style='html')
I get pybtex.plugin.PluginNotFound: plugin pybtex.style.formatting.html not found. I think I'm just not understanding how the call is supposed to work, and I can't find any examples of how to do it properly.
Here's a function I wrote for a similar use case: incorporating bibliographies into a website generated by Pelican.
from pybtex.plugin import find_plugin
from pybtex.database import parse_string

# Load the APA formatting style and the HTML output backend
APA = find_plugin('pybtex.style.formatting', 'apa')()
HTML = find_plugin('pybtex.backends', 'html')()

def bib2html(bibliography, exclude_fields=None):
    exclude_fields = exclude_fields or []
    if exclude_fields:
        # Round-trip through a BibTeX string so fields can be dropped safely
        bibliography = parse_string(bibliography.to_string('bibtex'), 'bibtex')
        for entry in bibliography.entries.values():
            for ef in exclude_fields:
                if ef in entry.fields.__dict__['_dict']:
                    del entry.fields.__dict__['_dict'][ef]
    formattedBib = APA.format_bibliography(bibliography)
    return "<br>".join(entry.text.render(HTML) for entry in formattedBib)
Make sure you've installed the following:
pybtex==0.22.2
pybtex-apa-style==1.3
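A hypothetical usage example (the entry and its fields are made up purely for illustration), assuming your BibTeX source is parsed into a pybtex BibliographyData first:
from pybtex.database import parse_string

# A made-up BibTeX entry purely for illustration
bibtex_source = """
@article{doe2020,
    author = {Doe, Jane},
    title = {A Sample Article},
    journal = {Journal of Examples},
    year = {2020}
}
"""
bib_data = parse_string(bibtex_source, 'bibtex')
print(bib2html(bib_data, exclude_fields=['journal']))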
Recently I encountered the following problem:
I have an array of strings:
['Mueller', 'Meier', 'Schulze', 'Schmidt']
I'm running into problems with its encoding in Python 3:
name.encode('cp1252')
Here is the full snippet:
target_name = [name.encode('cp1252')
               for name in ['Mueller', 'Meier', 'Schulze', 'Schmidt']]
assert_array_equal(arr['surname'], target_name)
And here is the point where I get the error. The error states:
Fail in test..... dtype='|S7'
I've been searching for a solution for some time; what I've found so far suggests I need to change the encoding. I applied:
name = np.char.encode('cp1252')
However, I get a different error with it.
Could someone help me with the error tracking?
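For what it's worth, np.char.encode expects the array as its first argument and the codec second. A minimal sketch of the intended comparison, assuming arr['surname'] holds Unicode strings (arr_surname below is a hypothetical stand-in):
import numpy as np
from numpy.testing import assert_array_equal

# Hypothetical array standing in for arr['surname']
arr_surname = np.array(['Mueller', 'Meier', 'Schulze', 'Schmidt'])

# np.char.encode takes the array first, then the codec
encoded = np.char.encode(arr_surname, 'cp1252')

target_name = [name.encode('cp1252')
               for name in ['Mueller', 'Meier', 'Schulze', 'Schmidt']]
assert_array_equal(encoded, target_name)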
self.product_urls.extend(hxs.select("//div[@id="product-list"]//div[@class="product-images"]/table/tr[1]//a')").extract())
This line of code gives me an "Invalid Path" exception. I guess it's something wrong with "product-list".
How can I write the same @id without getting the error?
The problem is the extra parentheses and the mismatched quotes; here is the correct syntax:
self.product_urls.extend(hxs.select('//div[@id="product-list"]//div[@class="product-images"]/table/tr[1]//a').extract())
Google should be your best friend for this kind of issue; you'll want to learn XPath/Python basics as well.