I have created a GitHub repo in which I keep all the code for my project.
The structure is:
myproject
├── api        (package)
├── database   (package)
└── feature    (package)
The api package is responsible for communicating with external APIs, like the iTunes API.
The database package is responsible for communicating with my database.
Finally, the feature package is the actual project I am building.
Each package has its own setup.py.
I have three questions about this structure:
1. How can I add the dependencies of api and database in the feature setup.py? (A sketch of what I have in mind follows right after this list.)
2. How would you recommend I deploy this Python code on Amazon? Using Docker? Platter? Something else?
3. If we assume that more features will be added to feature as separate packages, how can I deploy only a subset of the code to the server? Let's say the api package along with another feature that uses it.
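To make the first question concrete, here is a rough sketch of the feature setup.py I have in mind (the version numbers are placeholders):

    # feature/setup.py
    from setuptools import setup, find_packages

    setup(
        name="feature",
        version="0.1.0",
        packages=find_packages(),
        # the sibling packages from this repo, which bring in their own
        # external dependencies (e.g. whatever api needs to talk to iTunes)
        install_requires=[
            "api>=0.1.0",
            "database>=0.1.0",
        ],
    )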
Let me know if my questions are not clear and I will refine them.
I'm creating a Flask-based web app that I need to version. However, my project depends on multiple packages that I list in requirements.txt, and I use virtualenv to create a working setup.
My question is: if, between two versions, I want to upgrade a package that introduces a breaking change, how should I go about it? I'll be using the nginx web server (if that helps).
E.g.
1/ I initially create v1 with package a. Package a helps me generate responses.
2/ I version my API to v2, which upgrades a to a1, and a1 has breaking changes. So how should I maintain both versions in the same codebase when they depend on incompatible versions of the same package?
PS: if the above is solved, I can simply use blueprints to create the versioning.
If you can only generate the desired views with different versions of your dependencies, you should create two Flask applications.
In nginx you need to create different location sections for your two applications, depending on the path, i.e. route /v1 to application a and /v2 to application b.
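A minimal sketch of that nginx configuration, assuming each Flask application runs behind its own local port (the ports and server name are placeholders):

    server {
        listen 80;
        server_name example.com;

        # v1: the application built against package a
        location /v1 {
            proxy_pass http://127.0.0.1:8001;
        }

        # v2: the application built against the upgraded a1
        location /v2 {
            proxy_pass http://127.0.0.1:8002;
        }
    }

Each application lives in its own virtualenv, so the two incompatible versions of the package never meet in the same interpreter.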
I'm using the command func azure functionapp publish to publish my Python function app to Azure. As best I can tell, the command only packages up the source code and transfers it to Azure; then, on a remote machine in Azure, the function app is "built" and deployed. The build phase includes the collection of dependencies from PyPI. Is there a way to override where it looks for these dependencies? I'd like to point it to my own PyPI server, or alternatively, provide the wheels locally in my source tree and have it use those. I have a few questions/problems:
Are my assumptions correct?
Assuming they are, is this possible, and how?
I've tried a few things: read some docs, looked at the various --help options in the CLI tool, and set up a pip.conf file that I've verified works for local pip usage. I then deliberately broke it and checked whether the publish would fail (it did not, which leads me to believe it either ignores pip.conf, or the build and collection of dependencies happen on the remote end). I'm at a loss, and any tips, pointers, or answers are appreciated!
You can add an additional pip source pointing to your own PyPI server. Check https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python#remote-build-with-extra-index-url
Remote build with extra index URL:
When your packages are available from an accessible custom package index, use a remote build. Before publishing, make sure to create an app setting named PIP_EXTRA_INDEX_URL. The value for this setting is the URL of your custom package index. Using this setting tells the remote build to run pip install using the --extra-index-url option. To learn more, see the Python pip install documentation.
You can also use basic authentication credentials with your extra package index URLs. To learn more, see Basic authentication credentials in Python documentation.
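For example, assuming you use the Azure CLI, the app setting can be created like this (the app name, resource group, and index URL are placeholders):

    az functionapp config appsettings set \
        --name <APP_NAME> \
        --resource-group <RESOURCE_GROUP> \
        --settings PIP_EXTRA_INDEX_URL=https://pypi.example.com/simple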
And referencing local packages is also possible. Check https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python#install-local-packages
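If I read that second doc correctly, the local route amounts to installing your dependencies (e.g. from local wheels or your own index) into the .python_packages folder of the project and then skipping the remote build (the app name is a placeholder):

    pip install --target=".python_packages/lib/site-packages" -r requirements.txt
    func azure functionapp publish <APP_NAME> --no-build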
I hope both of your questions are answered now.
I noticed that the Flask tutorial involves use of pip. It looks like it's only used to create a wheel locally that will make setup on a server easier, but as a web dev newbie I'm curious: Does anyone actually go all the way to uploading their websites to a public repository like PyPI? What are the implications (security-related or otherwise) of doing so?
No, you should not upload private web projects to PyPI (the Python Package Index)! PyPI is for public, published projects intended to be shared.
Creating a package for your web project has advantages when deploying to your production servers, but that doesn't require that your package is available on PyPI. The pip command-line tool can find and install packages from other repositories, including private Git or Mercurial or SVN repositories or private package indexes too, as well as from the filesystem.
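For instance, here are a few of the non-PyPI sources pip can install from (the URLs and file names are placeholders):

    # install straight from a private Git repository, pinned to a tag
    pip install "git+ssh://git@github.com/yourorg/yourproject.git@v1.2.0"

    # install from a private package index instead of PyPI
    pip install --index-url https://pypi.example.com/simple yourproject

    # install a wheel from the local filesystem
    pip install ./dist/yourproject-1.2.0-py3-none-any.whl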
For the record: I've not bothered with creating packages for any of the recently deployed Flask projects I (helped) develop. These were put into production on cloud-hosted hardware and/or in Docker containers, directly from their GitHub repositories. Dependencies are installed with pip (driven by the Pipenv tool in all cases), but the project code itself is loaded directly from the checkout.
That said, if those projects start using continuous integration down the line, then it may make sense to use the resulting tested code, packaged as wheels, in production too. Publish those wheels to a private index or server; there are several projects and even a few SaaS services already available that let you manage a private package index.
If you do publish to PyPI, then anyone can download your package and analyse how your website works. It'd make it trivial for black-hat hackers to find and exploit security issues in your project that way.
As a fledgling Django developer, I was wondering if it was customary, or indeed possible, to create a site with Django then transfer the complete file structure to a different machine where it would "go live".
Thanks,
~Caitlin
You could use Git or Mercurial, or another version control system, to put the site structure on a central server. After that you could deploy the site to multiple servers, for example with Fabric. For the deployment process you should consider using virtualenv to isolate the project from global Python packages and requirements.
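A minimal sketch of such a Fabric deployment, using the modern Fabric 2 API (the host, paths, and service name are placeholders):

    # fabfile.py
    from fabric import task

    @task
    def deploy(c):
        """Pull the latest code and refresh dependencies on the server."""
        with c.cd("/srv/mysite"):
            c.run("git pull origin main")
            # install into the project's virtualenv, not the global site-packages
            c.run("venv/bin/pip install -r requirements.txt")
            # reload the application server (the command depends on your setup)
            c.run("sudo systemctl restart mysite")

Run it with fab -H user@yourserver deploy.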
Of course that's possible, and in fact it's the only way to "go live". You don't want to develop on your live server, do you? And this is true for any platform, not just Django.
If I understood your question correctly, you need a system to push your development code to live.
Use a version control system: Git, SVN, Mercurial, etc.
Identify environment-specific code, like settings/config files, and keep separate instances of them for each environment.
Create a testing/staging/pre-production environment that has live (or live-like) data, and deploy your code there before pushing it to live.
To avoid any downtime during the deployment process, a symbolic link is usually created that points to the existing code folder. When a new release is to be pushed, a new folder is created with the new code; once everything else is done (like settings and database changes), the symlink is pointed at the new folder. A sketch of this follows below.
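A sketch of that symlink switch (the paths and release name are placeholders; the web server is assumed to serve from the "current" symlink):

    # build the new release alongside the one that is live
    git clone --depth 1 https://example.com/myapp.git /srv/myapp/releases/20240115
    # ...apply settings changes, run database migrations, install dependencies...

    # atomically repoint "current"; requests keep being served throughout
    ln -sfn /srv/myapp/releases/20240115 /srv/myapp/current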
I am trying to create my first site in Django and as I'm looking for example apps out there to draw inspiration from, I constantly stumble upon a term called "reusable apps".
I understand the concept of an app that is reusable easily enough, but the means of reusing an app in Django are quite lost on me. A few questions that are bugging me in this whole business are:
What is the preferred way to re-use an existing Django app? Where do I put it and how do I reference it?
From what I understand, the recommendation is to put it on your PYTHONPATH, but that breaks as soon as I need to deploy my app to a remote location where I have limited access (e.g. on a hosting service).
So, if I develop my site on my local computer and intend to deploy it to an ISP where I only have FTP access, how do I reuse 3rd-party Django apps so that when I deploy my site, it keeps working (e.g. the only thing I can count on is that the service provider has Python 2.5 and Django 1.x installed)?
How do I organize my Django project so that I could easily deploy it along with all of the reusable apps I want to use?
In general, the only thing required to use a reusable app is to make sure it's on sys.path, so that you can import it from Python code. In most cases (if the author follows best practice), the reusable app tarball or bundle will contain a top-level directory with docs, a README, a setup.py, and then a subdirectory containing the actual app (see django-voting for an example; the app itself is in the "voting" subdirectory). This subdirectory is what needs to be placed in your Python path. Possible methods for doing that include:
running pip install appname, if the app has been uploaded to PyPI (these days most are)
installing the app with setup.py install (this has the same result as pip install appname, but requires that you first download and unpack the code yourself; pip will do that for you)
manually symlinking the code directory to your Python site-packages directory
using software like virtualenv to create a "virtual Python environment" that has its own site-packages directory, and then running setup.py install or pip install appname with that virtualenv active, or placing or symlinking the app in the virtualenv's site-packages (highly recommended over all the "global installation" options, if you value your future sanity)
placing the application in some directory where you intend to place various apps, and then adding that directory to the PYTHONPATH environment variable
You'll know you've got it in the right place if you can fire up a Python interpreter and "import voting" (for example) without getting an ImportError.
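For example, a quick end-to-end check using the virtualenv route and the django-voting app mentioned above:

    virtualenv env
    . env/bin/activate
    pip install django-voting
    python -c "import voting; print(voting.__file__)"   # no ImportError: it's on sys.path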
On a server where you have FTP access only, your only option is really the last one, and they have to set it up for you. If they claim to support Django they must provide some place where you can upload packages and they will be available for importing in Python. Without knowing details of your webhost, it's impossible to say how they structure that for you.
An old question, but here's what I do:
If you're using a version control system (VCS), I suggest putting all of the reusable apps and libraries (including Django itself) that your software needs into the VCS. If you don't want to put them directly under your project root, you can modify settings.py to add their location to sys.path, as in the sketch below.
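A minimal sketch of that settings.py tweak, assuming the bundled apps live in a vendor/ directory next to it (the directory name is my own convention, not a requirement):

    # settings.py
    import os
    import sys

    # the directory that contains this settings file
    BASE_DIR = os.path.dirname(os.path.abspath(__file__))

    # make everything under vendor/ importable as top-level packages
    sys.path.insert(0, os.path.join(BASE_DIR, "vendor"))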
After that deployment is as simple as cloning or checking out the VCS repository to wherever you want to use it.
This has two added benefits:
No version mismatches: your software always uses the version that you tested it with, not whatever version happened to be available at the time of deployment.
If multiple people work on the project, nobody else has to deal with installing the dependencies.
When it's time to update a component's version, update it in your VCS and then propagate the update to your deployments via it.