gcutil moveinstances failing due to "KeyError: u'CPUS'" - python

I am trying to move my micro-sized Compute Engine from us-central2-a to us-central1-a, since Google will be doing maintenance on the first zone in a week. I am running gcutil-1.9.0 on my Windows machine, via Cygwin.
I ran the exact command they suggested:
gcutil moveinstances --replace_deprecated --source_zone=us-central2-a --destination_zone=us-central1-a ".*" --project=careful-isotope-239
and got the following result:
Checking destination zone...
Retrieving instances in us-central2-a matching: .*...
Checking disk preconditions...
Checking quotas...
KeyError: u'CPUS'
So, this is evidently a Python error, but I have no idea how to proceed. Anybody have ideas?
Thanks,
Tim

You should use the --service_version=v1beta15 flag; they've broken the API for getzone (moveinstances is trying to verify the CPUS quota).
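For example, the original command with the flag added (the flag is the only change; everything else is as in the question):
gcutil moveinstances --service_version=v1beta15 --replace_deprecated --source_zone=us-central2-a --destination_zone=us-central1-a ".*" --project=careful-isotope-239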


How to fix, "google.auth.exceptions.DefaultCredentialsError: The file C:\Users\camer\Desktop\Video projects__SCRIPTS_VIDEOMASHUP\client_secret_672522134118-dv5sbe08lbtdi7f9o38k52albg7ms37v.apps.googleusercontent.com.json does not have a valid type. Type is None, expected one of ('authorized_user', 'service_account', 'external_account', 'impersonated_service_account')."
I'm using Anaconda. I'm trying to get https://github.com/LamboCreeper/video-mashup to work. I think the problem is that my environment variables aren't set correctly. I used this to set up my Google credentials:
My attempt at setting the variable: https://i.stack.imgur.com/w61E8.png
It returned nothing, so I thought it had worked.
Then I ran the next line of code (python3 video-mashup <source> <destination> <sentence>) and got this error:
The error output: https://i.stack.imgur.com/OJDFY.png
Also, when I ran the code, you can see on the first line that I changed python3 to python, because python3 wasn't being found.
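For what it's worth, the error text lists the credential types google.auth accepts, and a client_secret_*.json file is an OAuth client configuration rather than one of those types. A minimal sketch of pointing the library at a service-account key instead (the path below is a hypothetical placeholder):

import os

# Assumption: this is a downloaded service-account key (its JSON contains
# "type": "service_account"), not the client_secret_*.json file.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = r"C:\path\to\service-account-key.json"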

"Searching..." takes forever on readthedocs when the phrase is not present on the page

When I search for a phrase that is not present on the readthedocs page, I get the message "Searching...", which takes forever. By contrast, when I search for a known phrase, I get results within a second.
I looked into the page's console:
The resource from “https://xxxxxx.readthedocs.io/en/latest/_static/css/yy.css” was blocked due to MIME type (“text/html”) mismatch (X-Content-Type-Options: nosniff).
Read the Docs search failed. Falling back to Sphinx search.
I have tried:
[conf.py] I have the line:
html_css_files = [
    "css/yy.css",
]
so I added app.add_css_file(html_css_files) inside def setup(app):, but this caused a build error on readthedocs.
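(A guess at why that build broke: Sphinx's app.add_css_file expects a single relative path per call, not a list. If you go that route, a sketch of the corrected call would be:)

def setup(app):
    # add_css_file takes one relative path at a time, so register each file separately
    for css in html_css_files:
        app.add_css_file(css)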
I have added in conf.py:
notfound_urls_prefix = "/projects/xxxxxx/en/latest/"
but this also didn't help.
Have you encountered something similar? If so, how have you solved the problem?
In the end, the root cause was somewhere else.
When I looked into the raw build output on rtd, I found out that sphinx_rtd_theme was installed at version 0.4.3, which was not the latest (0.5.2 at the time). So I pinned the latest version in my docs/requirements.txt. This fixed the search problem.
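For reference, the pin is a single line in docs/requirements.txt (0.5.2 was the latest at the time; adjust the version as needed):
sphinx_rtd_theme==0.5.2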

Python get all stock Tickers

This question has been asked to death, but none of the answers provide an actually workable solution. I had previously found one in get-all-tickers:
pip install get-all-tickers
Recently, for whatever reason, the package get-all-tickers has stopped working:
from get_all_tickers import get_tickers as gt
list_of_tickers = gt.get_tickers()
gives me the error:
ParserError: Error tokenizing data. C error: Expected 1 fields in line 23, saw 46
As this is the only package I found that actually gave a complete ticker list (a good check is "NKLA", which is missing from 100% of the other "solutions" I've found on Stack Overflow or elsewhere), I now need either a new way to get up-to-date ticker lists or a fix for this...
Any ideas?
Another solution would be to load this data as CSV.
Get the CSV from:
https://plextock.com/us-symbols?utm_source=so
See this answer first: https://quant.stackexchange.com/a/1862/38968
NASDAQ makes this information available via FTP and they update it every night. Log into ftp.nasdaqtrader.com anonymously. Look in the directory SymbolDirectory. You'll notice two files: nasdaqlisted.txt and otherlisted.txt. These two files will give you the entire list of tradeable symbols, where they are listed, their name/description, and an indicator as to whether they are an ETF.
Given this list, which you can pull each night, you can then query Yahoo to obtain the necessary data to calculate your statistics.
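A minimal sketch of pulling and parsing nasdaqlisted.txt with Python's standard library (the pipe-delimited layout with a header row and a "File Creation Time" footer is an assumption based on how the file is currently published):

from ftplib import FTP
from io import BytesIO

# Anonymous login to NASDAQ's public FTP server, per the quoted answer above.
ftp = FTP("ftp.nasdaqtrader.com")
ftp.login()
ftp.cwd("SymbolDirectory")

buf = BytesIO()
ftp.retrbinary("RETR nasdaqlisted.txt", buf.write)
ftp.quit()

# First field of each pipe-delimited row is the ticker symbol;
# skip the header row and the trailing "File Creation Time" footer.
rows = buf.getvalue().decode("latin-1").splitlines()
tickers = [r.split("|")[0] for r in rows[1:]
           if "|" in r and not r.startswith("File Creation Time")]
print(len(tickers), tickers[:5])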
Also, the New York Stock Exchange provides a search function:
https://www.nyse.com/listings_directory/stock
...and this page seems to have a lot as well; it has Nikola/NKLA at least ;)
https://www.advfn.com/nasdaq/nasdaq.asp?companies=N
This pip package was recently broken. Someone has already raised an issue on the project's GitHub (https://github.com/shilewenuw/get_all_tickers/issues/12).
It was caused by a recent update to the NASDAQ API.
Not perfect, but Kaggle has some:
https://www.kaggle.com/datasets/jacksoncrow/stock-market-dataset?resource=download
You could use the free Alpha Vantage API: https://www.alphavantage.co/documentation/
example:
import requests
key = '2DHC1EFVR3EOQ33Z' # free key from https://www.alphavantage.co/support/#api-key -- no registration required
result = requests.get('https://www.alphavantage.co/query?function=GLOBAL_QUOTE&symbol=NKLA&apikey='+ key).json()
print(f'The price for NKLA right now is ${result["Global Quote"]["05. price"]}.')

Tor API example does not work correctly

I'm trying to run the example named "Using PycURL" from https://stem.torproject.org/tutorials/to_russia_with_love.html
Everything works fine, but at the end I get this error:
TypeError : String argument expected, got 'bytes'
Unable to reach http://google.com <<23, 'Failed writing body <0 != 144>'>>
The question is, how can I fix this?
I've tried using PycURL as is, without any proxy, and it works fine. But this example does not work.
I'm running Python 3.4 under Windows; here is my source code: http://pastebin.com/zFWrXU5E
Thanks.
P.S. I need this to work specifically with PycURL, because it is the most useful for my tasks.
P.S. #2: I made a little workaround, and it seems to work: http://pastebin.com/x8PtL9i3
Heh.
P.S. #3: Hey! I found where the error comes from: it's in PycURL's WRITEFUNCTION; somehow the io.StringIO().write function doesn't work...
Solved.
The problem was in Python 3.4: the StringIO object was changed. All you need is to change the output variable's type from StringIO to BytesIO and then convert the bytes to a string when printing the result.
Here is the working source code: http://pastebin.com/Ad8ENTGe
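In case the pastebin link rots, here is a minimal sketch of the fix (the SOCKS proxy settings are assumptions taken from the stem tutorial; your port may differ):

import io
import pycurl

SOCKS_PORT = 9050  # assumption: Tor's SOCKS port as used in the stem tutorial

# Under Python 3, PycURL's WRITEFUNCTION receives bytes, so the sink
# must be an io.BytesIO, not an io.StringIO.
output = io.BytesIO()

query = pycurl.Curl()
query.setopt(pycurl.URL, "http://google.com")
query.setopt(pycurl.PROXY, "localhost")
query.setopt(pycurl.PROXYPORT, SOCKS_PORT)
query.setopt(pycurl.PROXYTYPE, pycurl.PROXYTYPE_SOCKS5_HOSTNAME)
query.setopt(pycurl.WRITEFUNCTION, output.write)
query.perform()
query.close()

# Convert the collected bytes to a string for printing.
print(output.getvalue().decode("utf-8"))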
Thanks.
P.S. Who placed -1 ???
haters...

Dumbo(Python)/Hadoop unexpected output

I'm trying to execute the following code with Dumbo (Python) / Hadoop:
https://github.com/klbostee/dumbo/wiki/Short-tutorial#jobs-and-runners
I followed the tutorial correctly and did every step, but when I run the code in the Hadoop environment I obtain the following output:
SEQ/org.apache.hadoop.typedbytes.TypedBytesWritable/org.apache.hadoop.typedbytes.TypedBytesWritable�������ޭǡ�q���%�O��������������172.16.1.10������������������172.16.1.12������������������172.16.1.30������
It should return a list of IP addresses with connection counters.
Why do those characters appear? Is it an encoding problem? How do I fix it? Thanks.
Also, if I try the other programs in the tutorial, I have the same problem.
I'll answer my own question: that output is Dumbo's serialized form; there is no error.
To convert it into readable text, the following command is sufficient (the answer was in the tutorial; I just didn't see it):
dumbo cat ipcounts/part* -hadoop /usr/local/hadoop | sort -k2,2nr | head -n 5
