GitHub Actions using a locally licensed package - Python

I'm trying to set up a CI pipeline using GitHub Actions for my Python scripts, but the scripts depend on a package that is not publicly available and requires a local license. Because of this, I cannot use it in GitHub Actions. I also haven't found a way to run the GitHub Actions workflow locally against the local package (local runners for GitHub pipelines typically try to download requirements from the internet). Does anyone have a solution for this? Is there perhaps a way to do this in VS Code?

Related

MWAA: Trouble installing Airflow Google Providers from requirements.txt

I'm trying to set up an MWAA Airflow 2.0 environment that integrates S3 and GCP's Pub/Sub. While we have no problems with the environment being initialized, we're having trouble installing some dependencies and importing Python packages -- specifically apache-airflow-providers-google==2.2.0.
We've followed all of the instructions in the official MWAA Python documentation. We already included the constraints file as prescribed by AWS, activated all Airflow logging configs, and tested the requirements.txt file using the MWAA local runner. Every attempt to update our MWAA environment's requirements ended the same way.
When testing with the MWAA local runner, we observed that the requirements.txt file with the constraints still takes forever to resolve; installation takes anywhere from 10 to 30 minutes or more, which is no good.
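For reference, the constraints-pinned requirements.txt follows the shape AWS prescribes, roughly like this (the Airflow and Python versions in the URL are illustrative and must match your environment):

-c https://raw.githubusercontent.com/apache/airflow/constraints-2.0.2/constraints-3.7.txt
apache-airflow-providers-google==2.2.0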
As an experiment, we tried a version of the requirements.txt file that omits the constraints and the pinned versioning. That installs the packages successfully, and we no longer get import errors on either the MWAA local runner or the MWAA environment itself. However, all of our DAGs fail to run no matter what, and Airflow logs are inaccessible whenever we do this.
The team and I have been trying to get MWAA environments up and running for our different applications and ETL pipelines but we just can't seem to get things to work smoothly. Any help would be appreciated!
I had the same problems, and in the end we had to refactor a lot of things to remove the dependency. It looks like a problem between the pip resolver and apache-airflow-providers-google, as you can see on the official page:
https://pypi.org/project/apache-airflow-providers-google/2.0.0rc1/
In the worst case, you may need to run Airflow directly on EC2 from a Docker image and abandon MWAA :(
I've been through similar issues, but with different packages. There are certain things you need to take into consideration when using MWAA. I didn't have any issue testing the packages on the local runner and then on MWAA using a public VPC; I only had issues with a private VPC, since the web server then has no internet connection, so the method for getting packages to MWAA is different.
Things to take into consideration:
The version of the packages; test on the local runner first if you can.
Enable the logs; the scheduler and web server logs can show you issues, but they may not. The reason is that Fargate, which serves the images, will try to roll back to a working state rather than leave MWAA in a non-working state. So you might not see what the error actually is; in certain scenarios it may even look like there were no errors at all.
Check dependencies; you may need to download a package with pip download <package>==<version>, then inspect the contents of the .whl file to see what it depends on (see the sketch after this list). The metadata may contain extra notes that point you in the right direction. In one case, the Slack package wouldn't work until I also added the HTTP package, even though Airflow includes that package.
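A minimal sketch of that inspection step in Python (the package and version pinned here are only examples; substitute your own):

import glob
import subprocess
import zipfile

# Download just the wheel itself, without resolving its dependencies.
subprocess.run(
    ["pip", "download", "apache-airflow-providers-slack==4.0.0",
     "--no-deps", "-d", "wheels"],
    check=True,
)

# Every wheel carries a METADATA file listing its declared dependencies.
wheel_path = glob.glob("wheels/*.whl")[0]
with zipfile.ZipFile(wheel_path) as whl:
    metadata = next(n for n in whl.namelist() if n.endswith("METADATA"))
    for line in whl.read(metadata).decode().splitlines():
        if line.startswith("Requires-Dist:"):
            print(line)  # one line per declared dependency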
So yes, it's serverless, and you may have an easy time installing and setting up MWAA, but be prepared to do a little investigating if it doesn't work. I did contact AWS support, but managed to solve it myself in the end. Beyond trying the obvious things, only people who use MWAA frequently and have faced varied scenarios will be of much assistance.

How to build/deploy a Python Azure Function App using an internal PyPI

I'm using the command func azure functionapp publish to publish my Python function app to Azure. As best I can tell, the command only packages up the source code and transfers it to Azure; the function app is then "built" and deployed on a remote machine in Azure. The build phase includes collecting dependencies from PyPI. Is there a way to override where it looks for these dependencies? I'd like to point it to my own PyPI server, or alternatively, provide the wheels locally in my source tree and have it use those. I have a few questions/problems:
Are my assumptions correct?
Assuming they are, is this possible, and how?
I've tried a few things: read some docs, looked at the various --help options in the CLI tool, and set up a pip.conf file that I've verified works for local pip usage. I then deliberately "broke" it to see whether the publish would fail. It did not, which leads me to believe that publishing either ignores pip.conf, or that the build (and collection of dependencies) happens on the remote end. I'm at a loss, and any tips, pointers, or answers are appreciated!
You can add an additional pip index that points to your own PyPI server. Check https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python#remote-build-with-extra-index-url
Remote build with extra index URL:
When your packages are available from an accessible custom package index, use a remote build. Before publishing, make sure to create an app setting named PIP_EXTRA_INDEX_URL. The value for this setting is the URL of your custom package index. Using this setting tells the remote build to run pip install using the --extra-index-url option. To learn more, see the Python pip install documentation.
You can also use basic authentication credentials with your extra package index URLs. To learn more, see Basic authentication credentials in Python documentation.
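For instance, assuming a hypothetical internal index, the app setting would just be:

PIP_EXTRA_INDEX_URL = https://pypi.mycompany.example/simple

The remote build then passes that value to pip install via --extra-index-url.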
Referencing local packages is also possible. Check https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python#install-local-packages
I hope both of your questions are answered now.

What are the risks/benefits of uploading my website to a public repository?

I noticed that the Flask tutorial involves use of pip. It looks like it's only used to create a wheel locally that will make setup on a server easier, but as a web dev newbie I'm curious: Does anyone actually go all the way to uploading their websites to a public repository like PyPI? What are the implications (security-related or otherwise) of doing so?
No, you should not upload private web projects to PyPI (the Python Package Index)! PyPI is for public, published projects intended to be shared.
Creating a package for your web project has advantages when deploying to your production servers, but that doesn't require that your package is available on PyPI. The pip command-line tool can find and install packages from other repositories, including private Git or Mercurial or SVN repositories or private package indexes too, as well as from the filesystem.
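For example, a single requirements.txt line can point pip at a private Git repository (the URL and tag here are hypothetical):

git+ssh://git@github.com/yourorg/yourproject.git@v1.2.0#egg=yourproject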
For the record: I've not bothered creating packages for any of the recently deployed Flask projects I (helped) develop. These were put into production on cloud-hosted hardware and/or in Docker containers, directly from their GitHub repositories. Dependencies are installed with pip (driven by the Pipenv tool in all cases), but the project code itself is loaded directly from the checkout.
That said, if those projects start using continuous integration down the line, then it may make sense to use the resulting tested code, packaged as wheels, in production too. Publish those wheels to a private index or server; there are several projects and even a few SaaS services already available that let you manage a private package index.
If you do publish to PyPI, then anyone can download your package and analyse how your website works. It'd make it trivial for black-hat hackers to find and exploit security issues in your project that way.

How to set up PyCharm to develop AWS Lambda functions on a local machine?

I am looking to develop some AWS Lambda functions in Python using PyCharm. How can I set up my IDE to develop and test the functions locally? Can anyone guide me through setting it up? Any links or relevant tutorials would be really helpful.
As announced at the re:Invent 2018 keynote, JetBrains now offers the AWS Toolkit, which allows local and remote development of Lambda functions.
Despite some lingering issues it works quite well.
User exan has provided the link on AWS's site here
There is also a blog post on using PyCharm on macOS
Toolkit page on the JetBrains website
UPDATE April 2019: JetBrains has been very responsive and active in fixing any issues. The issues with credentials and templates seem resolved, and it's quite a joy to work with.
This page walks through it directly:
https://medium.com/@bezdelev/how-to-test-a-python-aws-lambda-function-locally-with-pycharm-run-configurations-6de8efc4b206
pip install python-lambda-local
python-lambda-local -f lambda_handler lambda_function.py event.json
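As a minimal sketch, the handler file and event payload those two commands expect could look like this (the file and function names match the command above; the event body is arbitrary):

# lambda_function.py
def lambda_handler(event, context):
    # Stand-in for real logic: echo the incoming event back.
    return {"statusCode": 200, "body": event}

with event.json containing, say:

{"key": "value"}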

How can I deploy an OVA file on vSphere Client with Python?

I want to automate deploying an OVA image on vSphere with Python.
I looked at some packages, viz. PySphere and psphere, but didn't find a direct method to do so.
Is there any library I'm missing, or is there another way to deploy OVA/OVF files/templates on vSphere with Python?
Please help!
I was in the same situation and found that there is a vSphere Automation API made in Python, available here (GitHub clone here).
All you need to do is extract the SDK and download deploy_ovf_template.py, for usage here or from the GitHub clone here. This template works with OVF; since you want to work with OVA, you'll need to do extra work and extract the OVA first (you'll get the OVF and vmdk files).
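Since an .ova file is just a tar archive bundling the .ovf descriptor, a manifest, and the .vmdk disks, that extraction step is a few lines of Python (the file names here are placeholders):

import tarfile

# Unpack the OVA; the output directory will contain the .ovf descriptor,
# a .mf manifest, and one or more .vmdk disk images.
with tarfile.open("appliance.ova") as ova:
    ova.extractall(path="extracted")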
For other scenarios, check the PDF documentation here.
Be aware that this is only supported on vSphere 6.5 and later.
As far as I know, there is no suitable API for deploying an OVF template from a Python package. You can use ovftool instead; VMware OVF Tool is a command-line utility that lets you import and export OVF packages to and from many VMware products.
Download ovftool from the VMware site: https://my.vmware.com/web/vmware/details?productId=352&downloadGroup=OVFTOOL350
To install ovftool:
sudo /bin/sh VMware-ovftool-3.5.0-1274719-lin.x86_64.bundle
To deploy an OVA image as a template, the syntax is:
ovftool -dm=thick -ds=3par1 -n=abhi_vm /root/lab/extract/overcloud-esx-ovsvapp.ova vi://root:pwd@10.1.2**.**/datacenter/host/cluster
Use os.system(ovftool_syntax) to run it from your Python script.
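A minimal sketch of that call from Python, swapping os.system for subprocess.run so a failed deploy raises an error (the datastore, VM name, paths, and target host below are placeholders):

import subprocess

subprocess.run(
    [
        "ovftool",
        "-dm=thick",               # disk provisioning mode
        "-ds=datastore1",          # target datastore
        "-n=my_vm",                # name for the deployed VM
        "/path/to/appliance.ova",  # source OVA
        "vi://root:password@vcenter.example/datacenter/host/cluster",
    ],
    check=True,  # raise CalledProcessError if ovftool exits non-zero
)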
