I am basically trying to access CrunchBase data through their REST API using Python. There is a package available on GitHub that gives me the following documentation. How do I get this "package"?
The CrunchBase API provides a RESTful interface to the data found on CrunchBase. The response is in JSON format.
Register
Follow the steps below to start using the CrunchBase API:
Sign Up
Login & get API key
Browse the documentation.
Setup
pip install git+git://github.com/anglinb/python-crunchbase
Up & Running
Import Crunchbase, then initialize the Crunchbase object with your API key.
You are missing the https. The setup line in the docs uses the git:// protocol:
git+git://github.com/anglinb/python-crunchbase
Install it over HTTPS instead:
pip install git+https://github.com/anglinb/python-crunchbase.git
Update: make sure you have git installed on your system.
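Once the install succeeds, usage should follow the package's docs quoted above. Here is a minimal sketch, assuming the package exposes a CrunchBase class constructed with your API key; the exact import path and class name may differ, so check the package's README:
from crunchbase import CrunchBase  # assumed import path; verify against the README
cb = CrunchBase("YOUR_API_KEY")    # placeholder API key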
Add this to your requirements.txt file:
git+https://github.com/user_name/project_name.git
=========
Ideally, requirements.txt (or reqs.txt) lives in your project's root folder. This file is where the names of all the Python libraries your project uses are stored, along with precise version numbers.
Here is a great deal of information, with easy examples, on this topic:
https://pip.readthedocs.io/en/1.1/requirements.html
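For example, a minimal requirements.txt that mixes a pinned PyPI package with a Git dependency could look like this (the package names and version are illustrative):
# requirements.txt (illustrative entries)
requests==2.20.0
git+https://github.com/user_name/project_name.git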
Situation
I have an existing Python app in Google Colab that calls the Twitter API and sends the response to Cloud Storage.
I'm trying to automate the Twitter API call in GCP, and am wondering how to install the requests library for the API call, and os for authentication.
I tried doing the following library installs in a Cloud Function:
import requests
import os
Result
That produced the following error message:
Deployment failure: Function failed on loading user code.
Do I need to install those libraries in a Cloud Function? I'm trying to understand this within the context of my Colab Python app, but am not clear whether the library installs are necessary.
Thank you for any input.
When you create your Cloud Function source code, there are two files:
main.py
requirements.txt
Add packages to requirements.txt as below:
#Function dependencies, for example:
requests==2.20.0
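For example, a minimal main.py for an HTTP-triggered function could look like the sketch below. requests resolves because it is listed in requirements.txt, while os is part of the standard library and never needs to be installed. The environment variable name and the Twitter endpoint are illustrative placeholders, not taken from the original post.
# main.py -- minimal sketch of an HTTP-triggered Cloud Function
import os
import requests

def fetch_tweets(request):
    # TWITTER_BEARER_TOKEN is a placeholder environment variable name
    token = os.environ.get("TWITTER_BEARER_TOKEN")
    resp = requests.get(
        "https://api.twitter.com/2/tweets/search/recent",
        params={"query": "python"},
        headers={"Authorization": f"Bearer {token}"},
    )
    return resp.text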
Creating a new Python environment for your project might help, and it is a good start for any project.
It is easy to create:
## for unix-based systems
## create a python environment
python3 -m venv venv
## activate your environment
## in linux-based systems
. ./venv/bin/activate
If you are using Google Colab, add "!" before these commands; they should work fine.
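Once the environment is active, you can install packages into it and record them for later reuse; the package name here is just an example:
## install a package inside the environment
pip install requests
## record installed packages so the project can be reproduced
pip freeze > requirements.txt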
I'm using the command func azure functionapp publish to publish my Python function app to Azure. As best I can tell, the command only packages up the source code and transfers it to Azure, then on a remote machine in Azure, the function app is "built" and deployed. The build phase includes the collection of dependencies from PyPI. Is there a way to override where it looks for these dependencies? I'd like to point it to my own PyPI server, or alternatively, provide the wheels locally in my source tree and have it use those. I have a few questions/problems:
Are my assumptions correct?
Assuming they are, is this possible, and how?
I've tried a few things: read some docs, looked at the various --help options in the CLI tool, and set up a pip.conf file that I've verified works for local pip usage. I then "broke" it on purpose to see whether the publish would fail (it did not, which leads me to believe it either ignores pip.conf or the build and collection of dependencies happens on the remote end). I'm at a loss, and any tips, pointers, or answers are appreciated!
You can add an additional pip source to point to your own PyPI server. Check https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python#remote-build-with-extra-index-url
Remote build with extra index URL:
When your packages are available from an accessible custom package index, use a remote build. Before publishing, make sure to create an app setting named PIP_EXTRA_INDEX_URL. The value for this setting is the URL of your custom package index. Using this setting tells the remote build to run pip install using the --extra-index-url option. To learn more, see the Python pip install documentation.
You can also use basic authentication credentials with your extra package index URLs. To learn more, see Basic authentication credentials in Python documentation.
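As a concrete illustration, one way to create that app setting before publishing is with the Azure CLI; the app name, resource group, and index URL below are placeholders:
az functionapp config appsettings set \
  --name <your-function-app> \
  --resource-group <your-resource-group> \
  --settings PIP_EXTRA_INDEX_URL=https://your.private.pypi/simple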
And as for referencing local packages, that is also possible. Check https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python#install-local-packages
I hope both of your questions are answered now.
I would like to use my own package, located in a private repository.
Currently, I use a requirements.txt file to install dependencies in my Python App Engine app.
However, I don't understand where I have to add my private dependency.
Thanks
You have to add the private package to the requirements.txt by using the git+ssh protocol. Better instructions can be found here: Is it possible to use pip to install a package from a private github repository?
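As a hypothetical illustration, such a requirements.txt entry for a private repository over SSH usually takes this shape (the repository and package names are placeholders):
git+ssh://git@github.com/your_org/your_private_repo.git#egg=your_package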
However, I am not sure how GAE handles the private keys required to access the package. If GAE only copies the lib directory from the App Engine project repository without other magical operations upon deployment, it should work. If App Engine only uses the lib dependencies to collect the names of the packages and then does something funny by itself, I guess you would be out of luck, unless there is a way to give credentials to App Engine for deployment.
I am unable to test this now, but will update as soon as I have.
I need to download the Flickr YFCC100M dataset. I have an Amazon AWS account but could not figure out a way to download the dataset.
There is a blog post, but it is not clear to me how to download the dataset from it.
With the Flickr API I can download images, but that will not be YFCC100M.
Here is one suggestion, but awscli could not be installed on my system:
>> sudo apt install awscli
>> ..........
>> Error: Unable to correct problems, you have held broken packages.
Is there any easy way to get this dataset downloaded?
This assumes that you already have pip and either Python 2.6.5+ or Python 3.3+ installed on your system. If you want to install awscli, you'll need to run
pip install awscli --upgrade --user
You can read more about installing the AWS Command Line Interface (CLI) here.
In addition, I think this link would let you gain access to the dataset that you are looking for.
You need to register on the Yahoo Webscope website and add this dataset to the "Cart".
After submitting your request for the dataset, you should receive an email with instructions. I am reproducing a part of this email, after scrubbing out some of the details and privileged information.
Download and install s3cmd from http://s3tools.org/download (or using an appropriate package manager for your platform)
Run 's3cmd --configure' and enter your access key and secret (available via XXXXXXXX <-- the actual link will be in their email). Here you can also specify additional options, such as enabling encryption during transfer, and enabling a proxy.
Run 's3cmd ls s3://yahoo-webscope/XXXXXXX/' to view the S3 objects for I3 - Yahoo Flickr Creative Commons 100M (14G) (Hosted on AWS)
Run 's3cmd get --recursive s3://yahoo-webscope/XXXXXXX/' to download a local copy of I3 - Yahoo Flickr Creative Commons 100M (14G) (Hosted on AWS)
It should be easy for you to follow these steps and get the dataset. I agree, the steps are not very transparent on their website!
I'm trying to deploy a Flask web app with mysql connectivity. It's my first time using Azure, and coming off Linux it all seems pretty confusing.
My understanding is that one lists the required packages in requirements.txt. When I build the default Flask app from Azure the file looks like this:
Flask<1
At this stage the site loads fine.
If I then include an additional line
https://cdn.mysql.com/Downloads/Connector-Python/mysql-connector-python-2.1.14.tar.gz
As per this answer https://stackoverflow.com/a/34489738/2697874
Then in my views.py file (which seems to be broadly synonymous with my old app.py file) I include... import mysql.connector
I then restart and reload my site...which then returns the error The page cannot be displayed because an internal server error has occurred.
Error logging spits out a load of HTML (which seems a pretty weird way to deliver error logs - so I must be missing something here). When I save it to HTML and load it up I get this...
How can I include the mysql.connector library within my Flask web app?
Per my experience, the resource https://cdn.mysql.com/Downloads/Connector-Python/mysql-connector-python-2.1.14.tar.gz is for Linux, not for Azure Web Apps based on Windows, and the link seems to be unavailable now.
I used the command pip search mysql-connector to list the related packages. Then, I tried to use mysql-connector instead of mysql-connector-python via pip install, and tried to import mysql.connector in my local Python interpreter, which works fine.
So please use mysql-connector==2.1.4 instead of mysql-connector-python in the requirements.txt file of your project in your IDE, then re-deploy the project to Azure and try again. The package will be installed automatically, as the official doc says below.
Package Management
Packages listed in requirements.txt will be installed automatically in the virtual environment using pip. This happens on every deployment, but pip will skip installation if a package is already installed.
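So in this case the requirements.txt would contain something like the following, combining the default Flask entry from the question with the suggested connector version:
Flask<1
mysql-connector==2.1.4
And in views.py the import should then work; here is a minimal connection sketch, where the host, user, password, and database values are placeholders to replace with your own settings:
import mysql.connector

# placeholder connection values; replace with your own database settings
conn = mysql.connector.connect(
    host="your-db-host",
    user="your-db-user",
    password="your-db-password",
    database="your-db-name",
)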
Any update, please feel free to let me know.