Basic gRPC version not working - Python

I was trying to implement a simple gRPC server/client and have narrowed the problem down: even the basic gRPC Python examples are not working on my machine.
Here is what I tried:
pip install grpcio grpcio-tools
git clone https://github.com/grpc/grpc.git
cd grpc/examples/python/route_guide
python run_codegen.py # Everything breaks whether I include this step or not
python route_guide_server.py
python route_guide_client.py # run in a second terminal; the error below comes from the client
And I get the following error:
-------------- GetFeature --------------
Traceback (most recent call last):
File "route_guide_client.py", line 119, in <module>
run()
File "route_guide_client.py", line 109, in run
guide_get_feature(stub)
File "route_guide_client.py", line 48, in guide_get_feature
latitude=409146138, longitude=-746188906))
File "route_guide_client.py", line 34, in guide_get_one_feature
feature = stub.GetFeature(point)
File "/Users/p/anaconda/envs/py36/lib/python3.6/site-packages/grpc/_channel.py", line 514, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/Users/p/anaconda/envs/py36/lib/python3.6/site-packages/grpc/_channel.py", line 448, in _end_unary_response_blocking
raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.UNIMPLEMENTED
details = "Method not found!"
debug_error_string = "{"created":"#1530451116.454542000","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1083,"grpc_message":"Method not found!","grpc_status":12}"
>
I can share more detail on my machine config if that is useful.

I realised that port 50051 was already occupied!
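For anyone hitting the same StatusCode.UNIMPLEMENTED / "Method not found!" symptom: a quick way to confirm that something else is already listening on the example's default port (50051 here) is to probe it before starting the server. This is just a minimal sketch of my own, not part of the gRPC examples:

import socket

def port_is_free(port, host="localhost"):
    # connect_ex returns 0 when something is already accepting connections on the port
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) != 0

if not port_is_free(50051):
    print("Port 50051 is already in use - another process may be answering your RPCs")

If the port is taken, either stop the other process or change the port in both route_guide_server.py and route_guide_client.py.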


Problems with data loading with Grakn 1.7.1

I have a migration script that I use to load data into Grakn via the Python driver with Grakn Core 1.6.2. This works.
I have recently downloaded 1.7.1, but when I run the same migration script I get the following error:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/grakn/service/Session/TransactionService.py", line 161, in send
response = next(self._response_iterator)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/grpc/_channel.py", line 388, in __next__
return self._next()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/grpc/_channel.py", line 382, in _next
raise self
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = ""
debug_error_string = "{"created":"#1589227241.242279000","description":"Error received from peer ipv6:[::1]:48555","file":"src/core/lib/surface/call.cc","file_line":1055,"grpc_message":"","grpc_status":3}"
>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "migrator.py", line 240, in <module>
insertSchema(URI, KEYSPACE)
File "/Users/johnnie/Documents/grain/insert.py", line 21, in insertSchema
write_transaction.query(schema)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/grakn/client.py", line 131, in query
return self._tx_service.query(query, infer)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/grakn/service/Session/TransactionService.py", line 49, in query
response = self._communicator.send(request)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/grakn/service/Session/TransactionService.py", line 165, in send
raise GraknError("Server/network error: {0}\n\n generated from request: {1}".format(e, request))
grakn.exception.GraknError.GraknError: Server/network error: <_Rendezvous of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = ""
debug_error_string = "{"created":"#1589227241.242279000","description":"Error received from peer ipv6:[::1]:48555","file":"src/core/lib/surface/call.cc","file_line":1055,"grpc_message":"","grpc_status":3}"
>
generated from request: query_req {
[SCHEMA IS SHOWN HERE]
}
Any help is greatly appreciated.
Are you using the latest release of both the Python Grakn client and the Grakn server? Grakn Core 1.7 is not compatible with pre-1.7 clients.
I have the same issue when using session.transaction().read(), when it calls TransactionService.py.
Because Grakn Core 1.7.1 does not support KGMS.
In detail, you need to run:
pip install --upgrade grakn-client
Sources: the Python client API docs and the pip install docs.
(The --upgrade flag is needed because the package is already installed.)
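If you want to confirm which client version you actually ended up with before re-running the migration, you can check it from Python. A minimal sketch, assuming the grakn-client package name from the pip command above:

import pkg_resources

# Expect a 1.7.x version to match a Grakn Core 1.7.1 server.
print(pkg_resources.get_distribution("grakn-client").version)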

Speech-to-text: google.api_core.exceptions.PermissionDenied: 403

I am trying to use the Google speech-to-text service, following https://googleapis.github.io/google-cloud-python/latest/speech/index.html
I have created a project, uploaded the audio to a gs:// bucket, added permissions, and downloaded a JSON key file named My First Project-7bb85a480131.json. https://console.cloud.google.com/storage/browser/mybucket?project=my-project
This is my file:
import os
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/home/joo/Документы/LocalRepository/robotze/My First Project-7bb85a480131.json"

from google.cloud import speech

client = speech.SpeechClient()

audio = speech.types.RecognitionAudio(
    uri='gs://zaudio/audio.mp3')
config = speech.types.RecognitionConfig(
    encoding=speech.enums.RecognitionConfig.AudioEncoding.LINEAR16,
    language_code='ru-RU',
    sample_rate_hertz=44100)

operation = client.long_running_recognize(config=config, audio=audio)
op_result = operation.result()

for result in op_result.results:
    for alternative in result.alternatives:
        print('=' * 20)
        print(alternative.transcript)
        print(alternative.confidence)
Issue: I got
google.api_core.exceptions.PermissionDenied: 403 my-service-account#my-project.iam.gserviceaccount.com does not have storage.objects.get access to mybucket/audio.mp3.
Full traceback
/home/joo/anaconda3/bin/python /home/joo/Документы/LocalRepository/robotze/speech-to-text-googlecloud.py
Traceback (most recent call last):
File "/home/joo/anaconda3/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
return callable_(*args, **kwargs)
File "/home/joo/anaconda3/lib/python3.6/site-packages/grpc/_channel.py", line 565, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/home/joo/anaconda3/lib/python3.6/site-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking
raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.PERMISSION_DENIED
details = "my-service-account#my-project.iam.gserviceaccount.com does not have storage.objects.get access to mybucket/audio.mp3."
debug_error_string = "{"created":"#1565253582.126380437","description":"Error received from peer ipv4:74.125.131.95:443","file":"src/core/lib/surface/call.cc","file_line":1052,"grpc_message":"my-service-account#my-project.iam.gserviceaccount.com does not have storage.objects.get access to mybucket/audio.mp3.","grpc_status":7}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/joo/Документы/LocalRepository/robotze/speech-to-text-googlecloud.py", line 46, in <module>
operation = client.long_running_recognize(config=config, audio=audio)
File "/home/joo/anaconda3/lib/python3.6/site-packages/google/cloud/speech_v1/gapic/speech_client.py", line 341, in long_running_recognize
request, retry=retry, timeout=timeout, metadata=metadata
File "/home/joo/anaconda3/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
return wrapped_func(*args, **kwargs)
File "/home/joo/anaconda3/lib/python3.6/site-packages/google/api_core/retry.py", line 273, in retry_wrapped_func
on_error=on_error,
File "/home/joo/anaconda3/lib/python3.6/site-packages/google/api_core/retry.py", line 182, in retry_target
return target()
File "/home/joo/anaconda3/lib/python3.6/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
return func(*args, **kwargs)
File "/home/joo/anaconda3/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.PermissionDenied: 403 my-service-account#my-project.iam.gserviceaccount.com does not have storage.objects.get access to mybucket/audio.mp3.
Process finished with exit code 1
What I tried: gcloud auth application-default login - the browser login works, but I still get the 403 error.
From what I can see in your logs, you are able to authenticate your service account inside your code (you are currently authenticating as starting-account-*******-239919.iam.gserviceaccount.com); however, that service account doesn't have the "storage.objects.get" permission on the object "zaudio/audio.mp3".
So you can either:
A.- Give the proper permissions to that service account (the "storage.objectViewer" role on that bucket may be enough, but you could also grant the "storage.admin" role so it has more control over that bucket and others; see the sketch after this list), or
B.- Authenticate using another service account that has the proper permissions.
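If you would rather do option A in code than in the console, something along these lines should work with the google-cloud-storage client. This is a rough sketch: the bucket name is the placeholder from the question, and the service account email is a placeholder you would replace with your own.

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("mybucket")  # placeholder bucket name from the question

# Grant the service account read access to objects in this bucket (option A above).
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.objectViewer",
    "members": {"serviceAccount:your-service-account@your-project.iam.gserviceaccount.com"},
})
bucket.set_iam_policy(policy)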
I resolved the following issue:
“google.api_core.exceptions.PermissionDenied: 403 my-service-account#my-project.iam.gserviceaccount.com does not have storage.objects.get access to mybucket/audio.mp3.”
To resolve this issue: go to your bucket, click the three dots, choose "Edit permissions", set Entity to "User", set Name to your service account email (in this case, my-service-account#my-project.iam.gserviceaccount.com), and set Access to "Reader". Save and try again. Regardless of whether you created the bucket, you have to explicitly set this permission. Hopefully this is useful.
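The same console steps (Entity "User", Access "Reader") can also be done from Python through the object ACL if you need to script it. A small sketch, assuming the gs://zaudio/audio.mp3 object from the question and a placeholder service account email:

from google.cloud import storage

client = storage.Client()
blob = client.bucket("zaudio").blob("audio.mp3")  # object from the gs:// URI in the question

# Equivalent of setting Entity "User" with Access "Reader" in the console.
blob.acl.user("your-service-account@your-project.iam.gserviceaccount.com").grant_read()
blob.acl.save()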

403 Forbidden when connecting to S3 bucket in AWS Cloud using Toil

I am a newbie to Toil and AWS, trying to run the HelloWorld.py example from the Toil documentation. I have already successfully installed Toil and related Python packages on my local Mac laptop and have set up my account at AWS. I have created a small leader/worker cluster:
$ cgcloud create-cluster toil -s 2 -t m3.large
and started it:
$ cgcloud ssh toil-leader
This changed my screen prompt to:
mesosbox#ip-172-31-25-135:~$
Then, from another window on my Mac, I started the Toil HelloWorld example with the command:
$ python2.7 HelloWorld.py --batchSystem=mesos --mesosMaster=mesos-master:5050 aws:us-west-2:my-aws-jobstore
And I got the following output:
Apples-Air 2017-06-02 19:30:53,524 MainThread INFO toil.lib.bioio: Root logger is at level 'INFO', 'toil' logger at level 'INFO'.
Apples-Air 2017-06-02 19:30:53,524 MainThread INFO toil.lib.bioio: Root logger is at level 'INFO', 'toil' logger at level 'INFO'.
Apples-Air 2017-06-02 19:30:54,852 MainThread WARNING toil.jobStores.aws.jobStore: Exception during panic
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/toil/jobStores/aws/jobStore.py", line 209, in initialize
self.destroy()
File "/usr/local/lib/python2.7/site-packages/toil/jobStores/aws/jobStore.py", line 1334, in destroy
self._bind(create=False, block=False)
File "/usr/local/lib/python2.7/site-packages/toil/jobStores/aws/jobStore.py", line 241, in _bind
versioning=True)
File "/usr/local/lib/python2.7/site-packages/toil/jobStores/aws/jobStore.py", line 721, in _bindBucket
bucket = self.s3.get_bucket(bucket_name, validate=True)
File "/usr/local/lib/python2.7/site-packages/boto/s3/connection.py", line 502, in get_bucket
return self.head_bucket(bucket_name, headers=headers)
File "/usr/local/lib/python2.7/site-packages/boto/s3/connection.py", line 535, in head_bucket
raise err
S3ResponseError: S3ResponseError: 403 Forbidden
Traceback (most recent call last):
File "helloWorld.py", line 22, in <module>
print(Job.Runner.startToil(j, options)) #Prints Hello, world!, ….
File "/usr/local/lib/python2.7/site-packages/toil/job.py", line 740, in startToil
with Toil(options) as toil:
File "/usr/local/lib/python2.7/site-packages/toil/common.py", line 614, in __enter__
jobStore.initialize(config)
File "/usr/local/lib/python2.7/site-packages/toil/jobStores/aws/jobStore.py", line 209, in initialize
self.destroy()
File "/usr/local/lib/python2.7/site-packages/toil/jobStores/aws/jobStore.py", line 206, in initialize
self._bind(create=True)
File "/usr/local/lib/python2.7/site-packages/toil/jobStores/aws/jobStore.py", line 241, in _bind
versioning=True)
File "/usr/local/lib/python2.7/site-packages/toil/jobStores/aws/jobStore.py", line 721, in _bindBucket
bucket = self.s3.get_bucket(bucket_name, validate=True)
File "/usr/local/lib/python2.7/site-packages/boto/s3/connection.py", line 502, in get_bucket
return self.head_bucket(bucket_name, headers=headers)
File "/usr/local/lib/python2.7/site-packages/boto/s3/connection.py", line 535, in head_bucket
raise err
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
Please help.
Thanks.
---John
I realize that this answer is a little late. One problem I notice is with the mesosMaster argument.
Instead, your command should have looked like:
python2.7 HelloWorld.py --batchSystem=mesos --mesosMaster=172.31.25.135:5050 aws:us-west-2:my-aws-jobstore
Notice that I replaced mesos-master with the actual IP address from
mesosbox#ip-172-31-25-135:~$
Hopefully in the future one will not need to pass this argument at all; however, this is not yet implemented as of 26 July 2017.
Also, for further problems with Toil, you will probably have better luck posting a new issue on the Toil GitHub page.
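As for the 403 itself: it generally means the AWS credentials that Toil picks up on the machine running HelloWorld.py cannot access the job-store bucket. A quick sanity check (my own sketch, not part of Toil) that the same boto 2.x credentials work at all outside of Toil:

import boto

# Uses the same credential sources Toil/boto would (~/.boto, environment variables, etc.).
conn = boto.connect_s3()
print([b.name for b in conn.get_all_buckets()])  # a 403 here points at the keys/permissions themselves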

Python: pip spitting out long errors when trying to install packages [duplicate]

Coincidentally, I ran the pip search django command and got a timeout error, even when specifying a high timeout value.
Below are the logs:
D:\PERFILES\rmaceissoft\virtualenvs\fancy_budget\Scripts>pip search django --timeout=300
Exception:
Traceback (most recent call last):
File "D:\PERFILES\Marquez\rmaceissoft\Workspace\virtualenvs\fancy_budget\lib\s
ite-packages\pip-1.1-py2.7.egg\pip\basecommand.py", line 104, in main
status = self.run(options, args)
File "D:\PERFILES\Marquez\rmaceissoft\Workspace\virtualenvs\fancy_budget\lib\s
ite-packages\pip-1.1-py2.7.egg\pip\commands\search.py", line 34, in run
pypi_hits = self.search(query, index_url)
File "D:\PERFILES\Marquez\rmaceissoft\Workspace\virtualenvs\fancy_budget\lib\s
ite-packages\pip-1.1-py2.7.egg\pip\commands\search.py", line 48, in search
hits = pypi.search({'name': query, 'summary': query}, 'or')
File "C:\Python27\Lib\xmlrpclib.py", line 1224, in __call__
return self.__send(self.__name, args)
File "C:\Python27\Lib\xmlrpclib.py", line 1575, in __request
verbose=self.__verbose
File "C:\Python27\Lib\xmlrpclib.py", line 1264, in request
return self.single_request(host, handler, request_body, verbose)
File "C:\Python27\Lib\xmlrpclib.py", line 1297, in single_request
return self.parse_response(response)
File "C:\Python27\Lib\xmlrpclib.py", line 1462, in parse_response
data = stream.read(1024)
File "C:\Python27\Lib\httplib.py", line 541, in read
return self._read_chunked(amt)
File "C:\Python27\Lib\httplib.py", line 574, in _read_chunked
line = self.fp.readline(_MAXLINE + 1)
File "C:\Python27\Lib\socket.py", line 476, in readline
data = self._sock.recv(self._rbufsize)
timeout: timed out
Storing complete log in C:\Users\reiner\AppData\Roaming\pip\pip.log
However, another search command finishes without problems:
pip search django-registration
Is this a bug in pip due to the large number of package names that contain "django"?
Note: internet connection speed = 2 Mbit/s
The --timeout option doesn't seem to work properly.
I can install django properly by using either:
pip --default-timeout=60 install django
or
export PIP_DEFAULT_TIMEOUT=60
pip install django
Note: using pip version 1.2.1 on RHEL 6.3
Source: DjangoDay2012-Brescia.pdf, page 11
PyPI is probably overloaded. Just enable mirror fallback and caching in pip, and maybe tune the timeout a bit. Add these to ~/.pip/pip.conf:
[global]
default-timeout = 60
download-cache = ~/.pip/cache
[install]
use-mirrors = true
The default timeout for pip is too short. You should set the PIP_DEFAULT_TIMEOUT environment variable to at least 60 (1 minute).
Source: http://www.pip-installer.org/en/latest/configuration.html

Framing Errors in Celery 3.0.1

I recently upgraded from Celery 2.3.0 to 3.0.1 and all the tasks run fine. Unfortunately, I'm getting a "Framing Error" exception pretty frequently. I'm also running supervisor to restart the workers, but since they are never really killed, supervisor has no way of knowing that Celery needs to be restarted. Has anyone seen this before?
[2012-07-13 18:53:59,004: ERROR/MainProcess] Unrecoverable error: Exception('Framing Error, received 0x00 while expecting 0xce',)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/worker/__init__.py", line 350, in start
component.start()
File "/usr/local/lib/python2.7/dist-packages/celery/worker/consumer.py", line 360, in start
self.consume_messages()
File "/usr/local/lib/python2.7/dist-packages/celery/worker/consumer.py", line 445, in consume_messages
drain_nowait()
File "/usr/local/lib/python2.7/dist-packages/kombu/connection.py", line 175, in drain_nowait
self.drain_events(timeout=0)
File "/usr/local/lib/python2.7/dist-packages/kombu/connection.py", line 171, in drain_events
return self.transport.drain_events(self.connection, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/kombu/transport/amqplib.py", line 262, in drain_events
return connection.drain_events(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/kombu/transport/amqplib.py", line 97, in drain_events
chanmap, None, timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/kombu/transport/amqplib.py", line 155, in _wait_multiple
channel, method_sig, args, content = read_timeout(timeout)
File "/usr/local/lib/python2.7/dist-packages/kombu/transport/amqplib.py", line 129, in read_timeout
return self.method_reader.read_method()
File "/usr/local/lib/python2.7/dist-packages/amqplib/client_0_8/method_framing.py", line 221, in read_method
raise m
Exception: Framing Error, received 0x00 while expecting 0xce
While I am not sure why this actually happens, switching from amqplib to librabbitmq helped me overcome this problem.
I haven't changed anything in configuration, just:
pip uninstall amqplib
pip install librabbitmq
And restarted celery workers.
Got this idea from https://github.com/celery/celery/issues/922
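If you want to confirm that Celery/kombu really picked up librabbitmq after the switch, you can check which transport module a broker connection resolves to. A small sketch; the broker URL below is a placeholder for your own:

from kombu import Connection

# With librabbitmq installed, the default 'amqp://' alias should resolve to it;
# otherwise you will see the pure-Python amqplib/amqp transport instead.
with Connection("amqp://guest:guest@localhost:5672//") as conn:
    print(conn.transport.__class__.__module__)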
