I am stuck and desperate.
Is it possible to serve multiple Python web applications on multiple different domains using virtual hosts in CherryPy? Wait... I can answer that myself: yes, it is possible with the virtual host dispatcher. But then I hit this requirement:
I need to run multiple instances of the same application, but in different versions. This means I need to somehow split the Python import namespace between these applications.
Example:
I have an application, MyApp, and there are two versions of it. I also have two domains, app1.com and app2.com.
When I access app1.com, I would like to get MyApp in version 1. When I access app2.com, it should be MyApp in version 2.
I am now using the VirtualHostDispatcher of CherryPy 3.2, and the problem is that when a method of MyApp version 1 does an import and MyApp version 2 has been loaded before, Python uses the already imported module (because of the module cache, sys.modules).
Yes, it is possible to wrap the import and clear the Python module cache every time (I use this for the top-level application object instantiation), but that seems quite unclean to me, and I suspect it is also inefficient.
So, what do you recommend?
I was thinking about using Apache 2 with CherryPy via mod_wsgi, but it seems that this does not solve the import problem, because there is still one Python process for all the apps together.
Maybe I am thinking about the whole problem the wrong way and will need to rethink it. I am open to any idea or tip. The only limitation is that I want to use Python 3; anything else is still open for discussion :-)
Thank you for every response!
Apache/mod_wsgi can do what is required. Each web application mounted under mod_wsgi runs in a distinct sub-interpreter within the same process, so each can use a different code base. Better still, use mod_wsgi's daemon mode and delegate each web application to a distinct process, so there is no risk of them interfering with each other.
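For illustration, a hedged sketch of what one mounted application's entry point could look like. The Apache directives in the comments are real mod_wsgi directives, but the process names and paths are made up:

# wsgi_app1.py -- WSGI entry point for MyApp version 1 (names assumed)
# Hypothetical Apache config for the app1.com vhost:
#   WSGIDaemonProcess app1 python-path=/srv/app1
#   WSGIProcessGroup app1
#   WSGIScriptAlias / /srv/app1/wsgi_app1.py
import cherrypy
from myapp import Root  # version 1 of MyApp, assumed importable from /srv/app1

cherrypy.config.update({'environment': 'embedded'})
application = cherrypy.Application(Root(), script_name=None)

A second, independent copy of this file for app2.com (delegated to its own daemon process) would import version 2, and the two code bases never share a sys.modules.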
What about creating a myapp_selector module that does something like this:
# myapp_selector.py
import myapp1  # version 1, installed under its own package name
import myapp2  # version 2, likewise

def application(env, start_response):
    if env['SERVER_NAME'] == 'myapp1.com':
        myapp = myapp1
    elif env['SERVER_NAME'] == 'myapp2.com':
        myapp = myapp2
    else:
        start_response('404 Not Found', [('Content-Type', 'text/plain')])
        return [b'unknown host']
    # delegate to the selected version's WSGI callable (assumed to be
    # exposed as `application` by each version)
    return myapp.application(env, start_response)
I'm working on an embedded system project where my dev setup is different from my prod setup.
The differences include variables and package imports.
What is the best way to structure the config files for a Python 3 application whose dev and prod setups differ?
prod: My device exchanges messages (using pyserial) with an electronic system and also communicates with a server.
dev: I use fake, fixed responses from a function to mock both the electronic system and the server.
Even though the functions I mock are essential in prod, they matter less in dev.
I can mock them because the most important part of this project is the code that uses and processes their results.
So there are package imports and function calls that make no sense in dev mode and introduce errors.
Every time I need to switch from one environment to the other, I have to change a good amount of code, and sometimes errors are introduced. I know this is really not the best approach (💩), and I wanted to know what the best practices are.
The Closest Solution Found
There is a good solution here for setting up different variables for each environment. I was hoping for something similar, but for projects that require different package imports in different environments.
My Setup
Basic workflow:
A task thread is executed every second
module_1 does its work and calls module_2
module_2 does its work and calls module_3
module_3 does its work and sends back a response
Basic folder structure:
root
main
config.py
/config
prod
dev
/mod_1
/mod_2
/mod_3
/replace_imports
module_1 and module_3 each use a specific package in prod that must be replaced by a dev function
What I have:
# config.py
import os

if os.environ["app"] == "dev":
    from root.config.dev import *
if os.environ["app"] == "prod":
    from root.config.prod import *
# config/prod.py
import _3rd_party_module_alpha
import _3rd_party_module_beta
...
# placeholder names: instantiate whatever the real modules actually provide
obj_alpha = _3rd_party_module_alpha.Alpha()
func_beta = _3rd_party_module_beta.beta_func
# config/dev.py
from root.replace_imports import *
# replace_imports.py
# the fake_* helpers are assumed to be defined in this module and
# return canned responses matching the real objects' interfaces
obj_alpha = fake_3rd_party_module_alpha()
func_beta = fake_3rd_party_module_beta()
You really should not have code changes between dev at point X, pushing into QA/CI, and then prod at point X. Your dev and prod code can be expected to differ at different stages, of course, and version control is key. But moving to production should not require code changes, just config changes.
Environment variables (see the 12-factor app material) can help, but sometimes config lives in code, for example in Django settings files.
In environments like Django, where "it points to" a settings file, I've seen this kind of thing:
base_settings.py:
common config
dev_settings.py:
# top of file
from base_settings import *
# ... dev specifics, including overrides of base ...
edit: I am well aware of the issues with import *. First, this is a special case, for configuration, where you want to import everything. Second, the real issue with import * is that it clobbers the current namespace; that import is right at the top, so that won't happen here. Linters aside (and they can be suppressed for just that line), the remaining issue is that you may not always know where a variable magically came from, unless you look in base.
prod_settings.py:
from base_settings import *
# ... production specifics, including overrides of base ...
Advanced users of webpack configuration files (and those are brutal) do the same thing, i.e. use a base.js then import that into a dev.js and prod.js.
The key here is to have as much as possible in base, possibly with the help of environment variables (be careful not to over-rely on those; no one likes apps with dozens of environment variable settings). Then dev and prod are basically about keys, paths, URLs, ports, that kind of stuff. Make sure to keep secrets out of them, however, because secrets gravitate there naturally but have no business being under version control.
re. your current code
import os

appname = os.getenv("app")
if appname == "dev":
    # outside of base=>dev/prod merges, at the top of "in-config" namespaces,
    # I would avoid `import *` here and elsewhere
    import root.config.dev as config
elif appname == "prod":
    import root.config.prod as config
else:
    raise ValueError(f"environment variable $app should be either `dev` or `prod`, got: {appname}")
Last, if you are stuck without an "it points to a Python settings file" mechanism like the one found in Django, you can (probably) roll your own by storing the config module path (xxx.prod, xxx.dev) in an environment variable and then using a dynamic import. Mind you, your current code largely does that already, except that you can't experiment with or add other settings files.
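A minimal sketch of that dynamic-import idea, assuming a hypothetical APP_SETTINGS environment variable that holds a dotted module path:

import importlib
import os

# APP_SETTINGS is a hypothetical variable, e.g. "root.config.prod"
settings_path = os.environ.get("APP_SETTINGS", "root.config.dev")
config = importlib.import_module(settings_path)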
Don't worry if you don't get it right straight away, and don't over-design up front; it takes a while to find what works best for you and your app.
Pipenv was created specifically for these kinds of things, and more.
Use git branches (or Mercurial or whatever version control system you're using; you are using a VCS, aren't you?) and virtualenvs. That's exactly what they are for. Your config files should only be used for things like DB connection identifiers, API keys, etc.
Some time ago I created a Python script that executes some actions on an instance based on a configuration file.
Here is the issue: I created two configuration files.
Config.py
instance = "<production url>"
Value1 = "A"
Value2 = "B"
...
TestConfig.py
instance = "<development url>"
Value1 = "C"
Value2 = "D"
...
So when I want the script to execute its tasks on a development instance for testing, I just import TestConfig.py instead of Config.py.
Main.py
# from Config import *
from TestConfig import *
The problem comes when I update the script using git. If I want to run the script in development, I have to modify the file manually, which means I end up with uncommitted changes on the server.
Making this change takes about a minute of my time, but I feel like I'm doing something wrong.
Do you know if there's a standard or right way to accomplish this kind of task?
Use this:
try:
    from TestConfig import *
except ImportError:
    from Config import *
On production, remove TestConfig.py
Export environment variables on your machines, and choose the settings based on that environment variable.
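For example, a minimal sketch (the variable names here are made up):

import os

# read individual settings from the environment, with dev-friendly defaults
INSTANCE_URL = os.environ.get("INSTANCE_URL", "http://localhost:8000")
API_KEY = os.environ.get("API_KEY", "dev-key")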
I think Django addresses this issue best with local_settings.py. Based on this approach, at the end of all your imports (after from config import *), just add:
# Add this at the end of all imports.
# This is safe to commit and even push to production, as long as you
# don't have local_config on your production server.
try:
    from local_config import *
except ImportError:
    pass
And create a local_config.py on each machine. What this does is import everything from config, and then everything again from local_config, overriding the global configuration settings if they have the same names as settings inside config.
The other answers here offer perfectly fine solutions if you really want to differentiate between production and test environments in your script. I would advocate for a different approach, however: to properly test your code, you should create an entirely separate test environment and run your code there, without any changes (or changes to the config files).
I can't make any specific suggestions for how to go about this since I don't know what the script does. In general though, you should try to create a sandbox that spoofs your production environment and is completely isolated. You can create a wrapper script that will run your code in the sandbox and modify the inputs and outputs as necessary to make your code interact with the test environment instead of the production environment. This wrapper is where you should be choosing which environment the code runs in and which config files it uses.
This approach abstracts the testing away from the code itself, making both easier to maintain. Designing for test is a reasonable approach for hardware, where you are stuck with the hardware you have after fabrication, but it makes less sense for software, where wrappers and spoofed data are easier to manage. You shouldn't have to modify your production code base just to handle testing.
It also entirely eliminates the chance that you'll forget to change something when you switch between testing and deploying to production.
I'm writing a web application in Python; I haven't decided whether to use Flask, web.py, or something else yet, and I want to be able to profile the live application.
There seems to be very little information on how to implement the instrumentation needed for performance measurement, short of sprinkling print(datetime.now()) everywhere.
What is the best way to instrument a Python application so that good measurements can be made? I guess I'm looking for something similar to the Stack Overflow team's mvc-mini-profiler.
You could simply run the cProfile tool that comes with Python:
python -m cProfile script.py
Of course, you would have to create a script.py file that executes the parts of the code you want to test. If you have some unit tests, you could also use those.
Or you could use:
import cProfile
cProfile.run('foo()')
to profile it from the foo entry point.
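For a bit more control over the output, something along these lines (a sketch, assuming foo is your entry point) sorts the results by cumulative time:

import cProfile
import pstats

profiler = cProfile.Profile()
profiler.enable()
foo()  # your entry point, assumed to be defined elsewhere
profiler.disable()

# print the 10 most expensive calls, sorted by cumulative time
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)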
Amir Salihefendic wrote a short (150 LOC) RequestProfiler, which is described in this blog post:
http://amix.dk/blog/post/19359
I haven't tried it, but since it is a WSGI middleware, it should be somewhat pluggable.
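For reference, the general shape of such a middleware is simple. This is just a minimal timing sketch, not the RequestProfiler itself:

import time

class TimingMiddleware:
    """Minimal WSGI middleware sketch that logs per-request wall time."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        start = time.time()
        try:
            # note: this times the call, not the full response iteration
            return self.app(environ, start_response)
        finally:
            elapsed = time.time() - start
            print(f"{environ.get('PATH_INFO', '?')} took {elapsed:.3f}s")

# usage: application = TimingMiddleware(application)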
You can just use a general-purpose web application performance tool, such as httperf. It works using an external client and works with any framework, since it runs against a standard interface (HTTP). Therefore it tests the performance of the full stack.
Use New Relic's free monitoring system. You simply install an agent on the server and point it at your Flask app's __init__.py file. Once you run the application with the agent properly set up, you will start seeing application metrics in New Relic's online dashboard, called APM.
By default it will show you graphs of your application's throughput (QPS/RPM), app response time, top transactions, error rate, error stack traces if any (e.g. for 500 errors), calls to external services, etc. In addition, you can monitor your system stats too.
I'm serving a Django app behind IIS 6. I'm wondering if I can restart IIS 6 from within Python/Django, and what the best way to do it would be.
Help would be great!
Besides what's already been suggested, you can also use WMI via either the Win32_Service class or the IIsWebService class, which inherits from it. There is a Python WMI wrapper available, which is based on pywin32.
UPDATE: A quick test of the following worked for me.
import wmi

c = wmi.WMI()
for service in c.Win32_Service(Name="W3SVC"):
    result, = service.StopService()
I didn't test the next piece of code, but something like this should also work:
for service in c.IIsWebService():
    result, = service.StopService()
You can see the documentation for the return values from the StopService and StartService methods.
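Putting the two together, a restart could look roughly like this (again untested; a robust version would poll the service state instead of sleeping):

import time
import wmi

c = wmi.WMI()
for service in c.Win32_Service(Name="W3SVC"):
    service.StopService()
    time.sleep(2)  # crude; polling service.State would be more robust
    service.StartService()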
The following post shows how to control Windows services from Python: http://fuzzytolerance.info/code/using-python-to-manage-windows-services/
You should be able to adapt that to restart the IIS web publishing service (known as 'w3svc').
I think you can execute an iisreset via the command line. I've never tried it with Django, but it should work and be quite simple to implement.
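Something along these lines should do it, assuming the process running your code has the privileges to restart IIS:

import subprocess

# shell out to iisreset and capture its output (requires admin rights)
result = subprocess.run(["iisreset"], capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    print("iisreset failed:", result.stderr)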
I created a nice RSS application in Python. It took a while, and most of the code just does heavy lifting: formatting XML, downloading feeds, etc. The application itself requires very little user interaction, just an initial list of RSS feeds and some parameters.
What would be really nice is a web front-end that let the user edit their feeds and parameters and then click a create button to run it.
I don't really want to rewrite the thing in a web framework. Is there anything that would let me build a nice front-end that interacts with the normal Python underneath?
It depends on your needs, free time, etc.
I recommend two solutions:
Django - a very rich framework that allows you to create full-featured sites using only its bundled components (in most cases they are good enough)
http://werkzeug.pocoo.org/ - a collection of tools if you want the ability to control everything at a low level
web.py is a very lightweight 'library' (not a framework) that you can put in front of your app. Just import your app within the main controller and use it as you would.
The Python standard library also includes a built-in SimpleHTTPServer module (http.server in Python 3), which might be what you need to create a front end for your app.
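As a rough sketch of that approach (Python 3 names; the handler below is a made-up placeholder):

from http.server import HTTPServer, BaseHTTPRequestHandler

class FrontEnd(BaseHTTPRequestHandler):
    # placeholder handler: a real one would render the feed list and
    # accept the user's parameters from a form
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>RSS front-end placeholder</h1>")

HTTPServer(("", 8000), FrontEnd).serve_forever()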
You may also deploy your Python code as a CGI script on a web server of your choice, e.g. Tomcat:
The CGI (Common Gateway Interface) defines a way for a web server to interact with external content-generating programs, which are often referred to as CGI programs or CGI scripts.
According to a Quora question this might be appropriate only for small projects, but I see nothing wrong with that, since it worked well for me with Perl scripts. The same source suggests a Python WSGI (Web Server Gateway Interface) server such as uWSGI, another server dedicated to running Python code.
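A CGI script is just a program that prints an HTTP header block followed by the body, so a minimal Python sketch looks like this:

#!/usr/bin/env python3
# minimal CGI sketch: the web server runs this script once per request
print("Content-Type: text/html")
print()  # a blank line ends the headers
print("<h1>Hello from a Python CGI script</h1>")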
Last but not least, there is the option of encapsulating your Python in Java code: I stumbled upon the Quora question "How do I run Java and Python in Tomcat?", which referred to using Jython and plyJy; the latter project is no longer maintained. However, there is also a related question on the topic of bundling Python and Java.