Use your own packages (Azure Artifacts) in an Azure Function? - python

I want to use a package deployed to Azure Artifacts in an Azure Function.
Locally it was simple: just update pip.ini, and the installation from requirements.txt works great. I can launch my Azure Function locally and everything works.
But how can I do it when I deploy the function? Maybe I need to put a pip.ini somewhere in my main folder?
Thanks
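For reference, my local pip.ini just adds the feed as an extra index, something like this (the URL is a placeholder for my Artifacts feed):
[global]
extra-index-url=https://pkgs.dev.azure.com/<organization>/_packaging/<feed>/pypi/simple/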

I finally found the solution:
Go to your Azure Function and open the console.
There, run the following commands:
mkdir pipconfig
cd pipconfig
Now write your pip.ini with:
echo "[global]" > pip.ini
echo "extra-index-url=https://XXXXX" >> pip.ini
with the last URL pointing to your Artifacts feed.
Now that you have created your pip.ini in your Azure Function, go to your application settings (environment variables) and create:
PIP_CONFIG_FILE with the value /home/pipconfig/pip.ini
Then restart your function: you can publish as always and import your private artifact.
Hope it helps other people.
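If you prefer not to click through the portal, the same app setting can also be created with the Azure CLI; this is just a sketch, with the app and resource group names as placeholders:
az functionapp config appsettings set \
  --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
  --settings PIP_CONFIG_FILE=/home/pipconfig/pip.ini
Restart the Function App afterwards so the worker picks up the new setting.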

From your Function App in the Azure Portal, navigate to its Configuration blade. Under the 'Application settings' tab, click 'New application setting' and provide the following as the key:
PIP_EXTRA_INDEX_URL
with the value set to the extra index URL you want pip to use.
Source: https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python#custom-dependencies
Any pip flag can be set as an environment variable. For example,
--trusted-host
can be set as
PIP_TRUSTED_HOST
Just prefix with PIP_, then the flag name in capitals, with - changed to _.
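You can check the same convention locally before touching the Function App; a quick sketch, with the feed URL and package name as placeholders:
# pip reads any PIP_<OPTION-NAME> environment variable as if the flag had been passed on the command line
export PIP_EXTRA_INDEX_URL=https://pkgs.dev.azure.com/<organization>/_packaging/<feed>/pypi/simple/
export PIP_TRUSTED_HOST=pkgs.dev.azure.com
pip install <your-private-package>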

Since you have generated a requirements.txt file that includes all of the package info for your function project, you just need to deploy your function project (with requirements.txt) to Azure. It will install the packages according to requirements.txt automatically. For more information about deploying a Python function to Azure, you can refer to this tutorial.
Update:
As you mentioned in the comments, your package is not a public package. You can try the command below:
func azure functionapp publish <APP_NAME> --build local
This command builds your project locally and then deploys it to Azure (but I'm not sure whether it will work here, because it also reads from the requirements.txt file).
If the "build local" command doesn't work, you need to use Docker instead.
Here is a tutorial with further information about the Docker-based steps.
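For reference, a rough sketch of the Docker route, assuming the Azure Functions Core Tools and Docker are installed (project, registry and image names are placeholders):
func init MyFunctionProj --worker-runtime python --docker   # new project with a generated Dockerfile (use --docker-only for an existing project)
cd MyFunctionProj
docker build -t <registry>/<image>:v1 .                     # the Dockerfile installs requirements.txt during the build, so feed credentials go here
docker push <registry>/<image>:v1
The Function App is then configured to run the pushed container image.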

Related

pip install azure-functions in azure pipeline fails with pip authenticate task

I am building an Azure CI/CD pipeline to build and publish an Azure Function from a DevOps repo to Azure. The function in question uses a custom SDK stored as a Python package Artifact in an organisation-scoped feed.
If I use a Pip Authenticate task to be able to access the SDK, the task passes, but the pipeline then crashes when installing the requirements.txt. Strangely, before we get to the SDK, there is an error installing the azure-functions package. However, if I remove the SDK requirement and the Pip Authenticate task, this error does not occur. So something about the authenticate task means the agent cannot access azure-functions.
Additionally, if I swap the order of 'azure-functions' and 'CustomSDK' in the requirements.txt, the agent is still unable to install the SDK artifact, so something must be wrong with the authentication task:
steps:
- task: PipAuthenticate@1
  displayName: 'Pip Authenticate'
  inputs:
    artifactFeeds: <organisation-scoped-feed>
    pythonDownloadServiceConnections: <service-connection-to-SDK-URL>
Why can I not download these packages?
This was due to confusion around the extra index url. In order to access both PyPI and the artifact feed, the following settings need to be set:
- task: PipAuthenticate@1
  displayName: 'Pip Authenticate'
  inputs:
    pythonDownloadServiceConnections: <service-connection-to-SDK-Feed>
    onlyAddExtraIndex: true
This way pip will consult PyPI first, and then the artifact feed.
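With that in place, the later install step can stay a plain pip call; the authenticate task exposes the feed to it through the PIP_EXTRA_INDEX_URL environment variable (the requirements.txt contents below are just an assumption):
# requirements.txt lists azure-functions (resolved from PyPI) and CustomSDK (resolved from the authenticated feed)
pip install -r requirements.txt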
Try running the function while the __init__.py file is active on the screen.
If you're just trying out the Quickstart, you shouldn't need to change anything in the function.json file. When you start debugging, make sure you're looking at the __init__.py file.
When you run the trigger, make sure you're on the __init__.py file. Otherwise, VS Code will try to run the file in the currently active window.

Google Cloud Functions - How to import a Python package (via PIP) from a GCP Repository in another project?

I need to have private Python packages in GCP usable in multiple projects. I haven't tried the Artifact Registry since that's still in alpha, so right now I've been trying with simple repositories, but I'm open to alternatives.
I have a Python package source code in a GCP Repository in Project A, and I have a cloud function in a repository also in Project A. In this cloud function I import the mentioned package by adding git+https://source.developers.google.com/p/project-a/r/my-python-package to my requirements.txt file.
If I deploy this cloud function in Project A via gcloud functions in my terminal, specifying --source=https://source.developers.google.com/projects/project-a/repos/my-cloud-function and --project=project-a, it works fine, and the function can successfully import the elements from the package when I call it, but if I deploy this function in Project B instead, I get the following error:
Deploying function (may take a while - up to 2 minutes)...failed.
ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Build failed: `pip_download_wheels` had stderr output:
Running command git clone -q https://source.developers.google.com/p/project-a/r/my-python-package /tmp/pip-req-build-f_bcp4y9
remote: PERMISSION_DENIED: The caller does not have permission
remote: [type.googleapis.com/google.rpc.RequestInfo]
remote: request_id: "abe4(...)"
fatal: unable to access 'https://source.developers.google.com/p/project-a/r/my-python-package/': The requested URL returned error: 403
ERROR: Command errored out with exit status 128: git clone -q https://source.developers.google.com/p/project-a/r/my-python-package /tmp/pip-req-build-f_bcp4y9 Check the logs for full command output.
This seems like a permissions issue. However, if I remove the package dependency from requirements.txt, it deploys fine, which means that Project B does have access to repos from Project A, so it seems like an issue inside pip. However, pip has no problems if I deploy to Project A, so I'm a little lost.
Many thanks in advance.
Artifact Registry has been GA (no longer Alpha/Beta) since last year.
I replicated your issue. The error is indeed due to permissions; it didn't happen when you removed the line from requirements.txt, probably because your own credentials had access to both projects.
To make the deployment work, you have to grant permissions on the repository to the service account that performs the deployment (the Cloud Function's service account). It can be found under Cloud Functions - (select your Cloud Function) - Details, and should be something like project@appspot.gserviceaccount.com.
Once you have located the service account, add it to the Cloud Source Repository by clicking Settings - Permissions and give it at least the Source Repository Reader role.
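If you prefer the CLI, a project-level grant like the following should also work; this is a sketch, it is broader than the repository-level grant described above, and the service account name is a placeholder:
# grants Source Repository Reader on every repo in project-a to Project B's Cloud Functions service account
gcloud projects add-iam-policy-binding project-a \
  --member="serviceAccount:project-b@appspot.gserviceaccount.com" \
  --role="roles/source.reader"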

How to specify authentication for pip with extra-index-url in pip.ini (Windows) or pip.conf (Mac/Linux) on Azure Pipelines/Artifacts

Azure Artifacts allows publishing a module to an artifact feed that can then be installed using pip by setting extra-index-url in pip.ini (Windows) or pip.conf (Mac/Linux).
However, when using pip install, the system asks for a user/password.
Is it possible to set this up inside pip.conf and/or, even better, use .ssh signatures?
I was facing the same issue; here is a workaround that worked for me. It bypasses the whole interactive process Lance Li-MSFT mentioned ("It will ask for your credentials and keep them in the local cache, and it won't ask for user and password again if everything is ok").
In the pip.ini / pip.conf file, add:
[global]
extra-index-url=https://<Personal Access Token>#pkgs.dev.azure.com/<Organization Name>/_packaging/<Feed Name>/pypi/simple/
This will be useful if you are in an environment where you can't do the first interactive login (Example Use-Case: Setting up an Azure Databricks from Azure Machine Learning Workspace and installing required packages).
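A sketch of the same non-interactive setup from a shell, assuming a PAT with packaging read scope on a Linux/macOS machine (pip still reads the legacy per-user path ~/.pip/pip.conf):
mkdir -p ~/.pip
cat > ~/.pip/pip.conf <<'EOF'
[global]
extra-index-url=https://<Personal Access Token>@pkgs.dev.azure.com/<Organization Name>/_packaging/<Feed Name>/pypi/simple/
EOF
pip install <your-private-package>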
Is it possible to setup this inside pip.conf and / or even better use .ssh signatures?
What you met is expected behavior if it's the first time you try to connect to the Azure DevOps feed.
It will ask for your credentials and keep them in the local cache, and it won't ask for user and password again if everything is ok.
We should note:
1. The Python Credential Provider is the artifacts-keyring package. It's used to keep the credentials, instead of other options like pip.conf or .ssh.
2. What it asks for is a PAT. For me, I enter the PAT in both the User and Password inputs.
3. If you still need to enter the password every time you connect to the feed, there must be something wrong with your Python Credential Provider (artifacts-keyring) package. Make sure you install this package successfully before running the pip install command.
4. There are two options (it seems you're using option 2) to connect to the feed; they both need the artifacts-keyring package to save the credentials. For me in a Windows environment, it's easy to install that package. But if you're in a Linux environment, you should check step 4 under the Get the tools button carefully:
Here's the link to the prerequisites mentioned above.
Hope all above helps :)
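For option 1 (the credential provider route), the flow is roughly the following, assuming a reasonably recent pip and a placeholder feed URL:
pip install artifacts-keyring
# on first use the credential provider prompts for credentials (a PAT works) and caches them
pip install <your-package> --extra-index-url https://pkgs.dev.azure.com/<organization>/_packaging/<feed>/pypi/simple/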

How do you run InstaBot.py on OpenShift?

To run InstaBot locally, you just clone the repo, install requirements.txt, put your login credentials in example.py, and run python example.py. I do not know how this translates to OpenShift.
Let's say you push your code to your own GitHub repo with the login credentials in environment variables (in a git-ignored file). You can set environment variables on the OpenShift dashboard, but where is the part where you specify python example.py?
For OpenShift, if example.py is a self-contained Python web application, then you would need to rename it to app.py, or add a .s2i/environment file to your repo and in it add:
APP_FILE=example.py
The script should then ensure it is listening on all interfaces, i.e., 0.0.0.0 and not just localhost. It also needs to use port 8080.
With that done, you can then use the Python S2I build process in OpenShift to deploy it. The packages listed in requirements.txt will be automatically installed for you.
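For example, a minimal sketch run from the root of the repo you push to OpenShift:
mkdir -p .s2i
echo "APP_FILE=example.py" > .s2i/environment   # tells the S2I Python builder which file to run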
If you are not familiar with OpenShift, you might consider reading:
https://www.openshift.com/deploying-to-openshift/
It is a free download.
For details on the Python S2I builder and what environment variables you can set to customise it, see:
https://github.com/sclorg/s2i-python-container/tree/master/3.6

OpenShift: how to install Python modules from a private repository?

I would like to be able to install Python packages on OpenShift, but those packages live in my private repositories on Bitbucket.
How can I create an SSH key for OpenShift, and how do I make OpenShift use it when installing packages (after adding the corresponding public key to Bitbucket as a Deploy Key)?
What I've tried:
I used ssh-keygen to create a key in ~/.openshift_ssh/. It was created, but I'm not sure it is being used.
I also tried adding the public key at <jenkins_dir>/app-root/data/.ssh/jenkins_id_rsa.pub, but the result is always the same. In the Jenkins console output of the build job:
Doing git clone from ssh://git@bitbucket.org/jpimentel/zed.git to /tmp/easy_install-FpEKam/zed.git
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Is there anything that can be done?
So, at this time OpenShift does not offer a simple mechanism to do this. I would urge developers to look at https://gondor.io/support/dependencies/ for an effective solution to the problem.
That said, I was finally able to find an acceptable (at least, for me) workaround that works on both scalable and non-scalable apps, with the following procedure:
1. Create a deploy/ directory in the repository.
2. Put a copy of your private deploy key in that directory (see the key-generation sketch after this list).
3. Create a bash script deploy/wrapper.sh that will run ssh with the provided key:
#!/bin/sh
ssh -o StrictHostKeyChecking=no -i $OPENSHIFT_REPO_DIR/deploy/id_deploy "$@"
Note the option passed to disable host key checking; cloning will fail without it.
4. Install dependencies in the build hook (.openshift/action_hooks/build). In my case I added something like:
echo "Cloning private repo..."
source $VIRTUAL_ENV/bin/activate
GIT_SSH=$OPENSHIFT_REPO_DIR/deploy/wrapper.sh pip install git+ssh://git@bitbucket.org/team/reponame.git#egg=reponame
5. Commit everything and push it to OpenShift.
Profit!
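For step 2, the deploy key itself can be generated like this (a sketch; the filename matches the wrapper above and the empty passphrase is an assumption):
ssh-keygen -t rsa -N "" -f deploy/id_deploy   # creates deploy/id_deploy and deploy/id_deploy.pub
chmod 600 deploy/id_deploy
# add the contents of deploy/id_deploy.pub to Bitbucket as a Deploy Key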
If you want to deploy your own custom Python modules, the recommended way is to create a libs directory in the application source code root and push them to your application's git repository. OpenShift will automatically pick up your modules.
