I am building a CI/CD Azure pipeline to build and publish an Azure Function from an Azure DevOps repo to Azure. The function uses a custom SDK stored as a Python package artifact in an organisation-scoped feed.
If I use a PipAuthenticate task so the agent can access the SDK, the task passes, but the pipeline then crashes while installing requirements.txt. Strangely, before we even get to the SDK, there is an error installing the azure-functions package. However, if I remove the SDK requirement and the PipAuthenticate task, this error does not occur. So something about the authenticate task means the agent cannot access azure-functions.
Additionally, if I swap the order of 'azure-functions' and 'CustomSDK' in requirements.txt, the agent is still unable to install the SDK artifact, so something must be wrong with the authentication task:
steps:
- task: PipAuthenticate@1
  displayName: 'Pip Authenticate'
  inputs:
    artifactFeeds: <organisation-scoped-feed>
    pythonDownloadServiceConnections: <service-connection-to-SDK-URL>
Why can I not download these packages?
This turned out to be confusion around the extra index URL. To access both PyPI and the artifact feed, the following settings need to be set:
- task: PipAuthenticate@1
  displayName: 'Pip Authenticate'
  inputs:
    pythonDownloadServiceConnections: <service-connection-to-SDK-Feed>
    onlyAddExtraIndex: true
This way pip will consult PyPI first, and then the artifact feed.
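For reference, a hedged sketch of the install step that would follow the task above; the requirements file name comes from the question, everything else is a placeholder:

# The authenticate task should expose the feed to later steps as an extra
# index (via PIP_EXTRA_INDEX_URL), so a plain pip install can resolve
# azure-functions from PyPI and the custom SDK from the feed.
- script: |
    python -m pip install --upgrade pip
    pip install -r requirements.txt
  displayName: 'Install requirements'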
Try running the function while the __init__.py file is the active file in the editor.
If you're just trying out the Quickstart, you shouldn't need to change anything in the function.json file. When you start debugging, make sure you're looking at the __init__.py file.
When you run the trigger, make sure you're on the __init__.py file. Otherwise, VS Code will try to run the file in the currently active window.
Situation
I have an existing Python app in Google Colab that calls the Twitter API and sends the response to Cloud Storage.
I'm trying to automate the Twitter API call in GCP, and am wondering how I install the requests library for the API call, and install os for authentication.
I tried doing the following library installs in a Cloud Function:
import requests
import os
Result
That produced the following error message:
Deployment failure: Function failed on loading user code.
Do I need to install those libraries in a Cloud Function? I'm trying to understand this within the context of my Colab Python app, but am not clear whether the library installs are necessary.
Thank you for any input.
When you create your Cloud Function source code, there are two files:
main.py
requirements.txt
Add the packages to requirements.txt as below:
#Function dependencies, for example:
requests==2.20.0
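For completeness: the function code only imports the packages; the deployment step installs whatever requirements.txt lists, and os is part of the Python standard library, so it needs no entry at all. A deployment sketch, with a hypothetical function name and an HTTP trigger:

gcloud functions deploy my-twitter-function \
    --runtime python310 \
    --trigger-http \
    --source . \
    --entry-point main
## during deployment the builder runs pip install against requirements.txt,
## so "import requests" works once the function loads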
Creating a new Python environment for your project might help, and it's a good start for any project.
It is easy to create:
## for unix-based systems
## create a python environment
python3 -m venv venv
## activate your environment
## in linux-based systems
. ./venv/bin/activate
If you are using Google Colab, add "!" before these commands and they should work fine.
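For example, in a Colab cell the following should work (a sketch; calling the environment's pip directly avoids having to keep the activation alive between cells):

!python3 -m venv venv
## install this project's dependencies into the new environment
!./venv/bin/pip install -r requirements.txt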
I need to have private Python packages in GCP usable in multiple projects. I haven't tried the Artifact Registry since that's still in alpha, so right now I've been trying with simple repositories, but I'm open to alternatives.
I have the source code of a Python package in a GCP repository in Project A, and I have a cloud function in a repository also in Project A. In this cloud function I import the mentioned package by adding git+https://source.developers.google.com/p/project-a/r/my-python-package to my requirements.txt file.
If I deploy this cloud function in Project A via gcloud functions in my terminal, specifying --source=https://source.developers.google.com/projects/project-a/repos/my-cloud-function and --project=project-a, it works fine, and the function can successfully import the elements from the package when I call it, but if I deploy this function in Project B instead, I get the following error:
Deploying function (may take a while - up to 2 minutes)...failed.
ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Build failed: `pip_download_wheels` had stderr output:
Running command git clone -q https://source.developers.google.com/p/project-a/r/my-python-package /tmp/pip-req-build-f_bcp4y9
remote: PERMISSION_DENIED: The caller does not have permission
remote: [type.googleapis.com/google.rpc.RequestInfo]
remote: request_id: "abe4(...)"
fatal: unable to access 'https://source.developers.google.com/p/project-a/r/my-python-package/': The requested URL returned error: 403
ERROR: Command errored out with exit status 128: git clone -q https://source.developers.google.com/p/project-a/r/my-python-package /tmp/pip-req-build-f_bcp4y9 Check the logs for full command output.
This seems like a permissions issue. However, if I remove the package dependency from requirements.txt, it deploys fine, which means that Project B does have access to repos from Project A, so it seems like an issue inside pip. However, pip has no problems if I deploy to Project A, so I'm a little lost.
Many thanks in advance.
Artifact Registry has been GA, no longer alpha/beta, since last year.
I replicated your issue. The error is indeed due to permissions; it didn't happen when you removed the line from requirements.txt, probably because the credentials used there had access to both projects.
To make the deployment work, you have to grant permissions on the repository to the service account that performs the deployment (the Cloud Functions service account). It can be found under Cloud Functions - (select your Cloud Function) - Details, and it should look something like project@appspot.gserviceaccount.com
Once you have located the service account, add it to the Cloud Source Repository via Settings - Permissions, granting it at least the Source Repository Reader role.
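If you prefer the command line, a similar grant can be made with gcloud; the project IDs and account below are placeholders, and note that this project-level binding gives the account read access to every repository in project-a (roles/source.reader is the Source Repository Reader role). For a single repository, use the repository's Settings - Permissions page as described above.

gcloud projects add-iam-policy-binding project-a \
    --member="serviceAccount:project-b@appspot.gserviceaccount.com" \
    --role="roles/source.reader"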
I'm using the command func azure functionapp publish to publish my Python function app to Azure. As best I can tell, the command only packages up the source code and transfers it to Azure; then, on a remote machine in Azure, the function app is "built" and deployed. The build phase includes the collection of dependencies from PyPI. Is there a way to override where it looks for these dependencies? I'd like to point it to my own PyPI server, or alternatively, provide the wheels locally in my source tree and have it use those. I have a few questions/problems:
Are my assumptions correct?
Assuming they are, is this possible, and how?
I've tried a few things: read some docs, looked at the various --help options in the CLI tool, and set up a pip.conf file that I've verified works for local pip usage. I then "broke" it on purpose to see if the publish would fail (it did not, which leads me to believe it either ignores pip.conf, or the build and collection of dependencies happens on the remote end). I'm at a loss, and any tips, pointers, or answers are appreciated!
You can add an additional pip source pointing to your own PyPI server. Check https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python#remote-build-with-extra-index-url
Remote build with extra index URL:
When your packages are available from an accessible custom package index, use a remote build. Before publishing, make sure to create an app setting named PIP_EXTRA_INDEX_URL. The value for this setting is the URL of your custom package index. Using this setting tells the remote build to run pip install using the --extra-index-url option. To learn more, see the Python pip install documentation.
You can also use basic authentication credentials with your extra package index URLs. To learn more, see Basic authentication credentials in Python documentation.
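If you script your deployments, the app setting can be created with the Azure CLI before running func azure functionapp publish; the app name, resource group, and index URL below are placeholders:

az functionapp config appsettings set \
    --name <function-app-name> \
    --resource-group <resource-group> \
    --settings PIP_EXTRA_INDEX_URL=https://<your-pypi-server>/simple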
Referencing local packages is also possible. Check https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python#install-local-packages
I hope both of your questions are answered now.
Azure Artifacts allows publishing a package to a feed, which can then be installed with pip by setting extra-index-url in pip.ini (Windows) or pip.conf (Mac/Linux).
However, when running pip install, the system asks for a user/password.
Is it possible to set this up inside pip.conf and/or, even better, use .ssh signatures?
I was facing the same issue; here is a workaround that worked for me to bypass the entire process Lance Li-MSFT mentioned (where pip asks for your credentials once and keeps them in the local cache so it won't ask again).
In the pip.ini / pip.conf file, add:
[global]
extra-index-url=https://<Personal Access Token>#pkgs.dev.azure.com/<Organization Name>/_packaging/<Feed Name>/pypi/simple/
This is useful if you are in an environment where you can't do the first interactive login (example use case: setting up Azure Databricks from an Azure Machine Learning workspace and installing required packages).
Is it possible to set this up inside pip.conf and/or, even better, use .ssh signatures?
What you're seeing is expected behavior if it's the first time you try to connect to the Azure DevOps feed.
It will ask for your credentials and keep them in a local cache, and it won't ask for a user and password again if everything is OK.
We should note:
1. The Python Credential Provider is the artifacts-keyring package. It's used to keep the credentials, rather than other options like pip.conf or .ssh.
2. What it asks for is a PAT (Personal Access Token). For me, entering the PAT in both the User and Password inputs works.
3. If you still need to enter the password every time you connect to the feed, there must be something wrong with your Python Credential Provider (artifacts-keyring) package. Make sure you install this package successfully before running the pip install command.
4. There are two options to connect to the feed (it seems you're using option 2); both need the artifacts-keyring package to save the credentials. For me, in a Windows environment it's easy to install that package, but if you're in a Linux environment, check step 4 under the feed's Get Tools button carefully (a minimal command sketch follows this list).
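A minimal command sequence matching the notes above; the package name and feed URL are placeholders:

## install the credential provider first so pip can cache the PAT
pip install keyring artifacts-keyring
## the first install prompts for credentials once; later installs should not
pip install <package-name> --index-url https://pkgs.dev.azure.com/<Organization Name>/_packaging/<Feed Name>/pypi/simple/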
Here's the link to the prerequisites mentioned above.
Hope all above helps :)
Some background: I have set up Airflow on Kubernetes (on AWS). I am able to run DAGs that query a database, send emails, or do anything that doesn't require a package that isn't already part of Airflow. For example, if I try to run a DAG that uses the facebook-business SDK, the DAG will obviously break because the dependency isn't available. I've tried a couple of different ways of getting this dependency, along with others, installed but haven't been successful.
I have tried to install python packages by modifying my scheduler and webserver deployments to do a pip install of my dependencies as part of an initContainer. When I do this, the DAG remains broken as it is unable to find the needed packages. When I open a shell to my pod I can see that the dependencies have not been installed (I check using pip list). I have also verified that there aren't other python/pip versions installed.
I have also tried to install the dependencies by running a pip install after opening a shell to my pod. This way is successful in installing the dependency in the correct place and also makes it available. However, instead of the webserver UI showing that my DAG is broken, I get the "this dag isn't available in the webserver dagbag object" message.
I would expect that running pip install as part of my initContainer or container would make these dependencies available in my pod. However, this isn't the case. It's as if pip install runs without any issues, but by the time my pods are fully set up the Python packages are nowhere to be found.
I forgot to say that I have found a way to make it work, but it feels somewhat hacky, and like there should be a better way:
- If I open a shell to my webserver container and install the needed dependencies and then open a shell to my scheduler and do the same thing, the dependencies are found and the DAG works.
The init container is a separate Docker instance. Unless you rig up some sort of shared storage for your Python libraries (which is quite dubious), any pip installs in the init container won't affect the running containers of the pod.
I see two options:
1) Modify the docker image that you're using to include the packages you need
2) Prepend a pip install to the command being run in the pod. It's not uncommon to string together a few commands with && between them, in order to execute a sequence of operations in a starting pod.
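For option 2, a sketch of what the prepended command might look like; the airflow subcommand and the facebook-business package come from the question, not a prescribed setup:

## e.g. as the container command in the scheduler/webserver manifests
pip install --user facebook-business && exec airflow scheduler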
I would recommend updating your Airflow Docker image to include the libraries you need.
If you plan to use lots of different libraries for specific DAGs, then it may be worth creating multiple Docker images and then referencing them at the task level:
MyOperator(...,
    executor_config={
        "KubernetesExecutor":
            {"image": "myCustomDockerImage"}
    }
)
Reference: baseoperator.py
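For the image update that both answers recommend, a minimal Dockerfile sketch; the base image tag is an assumption, so start from whatever image you currently deploy:

FROM apache/airflow:2.3.0
## bake the DAG-level dependencies into the image so the scheduler,
## webserver and worker pods all see the same packages
RUN pip install --no-cache-dir facebook-business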