How do I run OpenERP on uWSGI?
I found this wsgi script online, but I'm not sure where to place it?
import openerp

try:
    import uwsgi
    uwsgi.post_fork_hook = openerp.wsgi.core.on_starting
except ImportError:
    openerp.wsgi.core.on_starting()

# Equivalent of --load command-line option
openerp.conf.server_wide_modules = ['web']

# internal TODO: use openerp.conf.xxx when available
conf = openerp.tools.config

# Path to the OpenERP Addons repository (comma-separated for
# multiple locations)
conf['addons_path'] = '/home/openerp/addons/trunk,/home/openerp/web/trunk/addons'

# Optional database config if not using local socket
#conf['db_name'] = 'mycompany'
#conf['db_host'] = 'localhost'
#conf['db_user'] = 'foo'
#conf['db_port'] = 5432
#conf['db_password'] = 'secret'

# OpenERP Log Level
# DEBUG=10, DEBUG_RPC=8, DEBUG_RPC_ANSWER=6, DEBUG_SQL=5, INFO=20,
# WARNING=30, ERROR=40, CRITICAL=50
# conf['log_level'] = 20

# If --static-http-enable is used, path for the static web directory
#conf['static_http_document_root'] = '/var/www'

# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
application = openerp.wsgi.core.application
I installed OpenERP in a virtual environment in /var/www/openerp/venv and I can run it by calling $ openerp-server.
Thanks in advance.
You can just put the script file in the same directory as the openerp-server.py file.
However, when I tested it, it did not work, since gunicorn cannot find openerp at the import openerp line. The reason is that openerp is not installed as a Python module on the system by the usual installation procedures.
I think it will work when you do an OpenERP install with the DEB package (with such an install you should disable the start script so it only runs from gunicorn).
Let me also do a test install and share the result.
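One thing you could try (untested here; the file path and port are only examples): save the script as, say, /var/www/openerp/openerp-wsgi.py and point uWSGI at both the file and your virtualenv, so that the import openerp line resolves inside the venv:
uwsgi --http :8069 --wsgi-file /var/www/openerp/openerp-wsgi.py --virtualenv /var/www/openerp/venv --master --processes 2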
I created a standalone Python script to export my atlas layouts. Everything is working great, except that the SVG symbols I am using from the Resource Sharing plugin come out as question marks; I assume it is having trouble locating them. However, if I run the script via the startup.py in the QGIS3 folder, everything works as expected. I would really like to avoid using this method, though, as it prevents you from using QGIS until the script finishes, which takes about 2 hours. I am hoping that I just need to add a simple environment variable to my .bat file so that it can locate the Resource Sharing plugin. Thanks in advance for any help!
.bat file
@ECHO off
set OSGEO4W_ROOT=C:\OSGeo4W64
call "%OSGEO4W_ROOT%\bin\o4w_env.bat"
call "%OSGEO4W_ROOT%\bin\qt5_env.bat"
call "%OSGEO4W_ROOT%\bin\py3_env.bat"
path %OSGEO4W_ROOT%\apps\qgis\bin;%PATH%
set QGIS_PREFIX_PATH=%OSGEO4W_ROOT%\apps\qgis
set GDAL_FILENAME_IS_UTF8=YES
set VSI_CACHE=TRUE
set VSI_CACHE_SIZE=1000000
set QT_PLUGIN_PATH=%OSGEO4W_ROOT%\apps\qgis\qtplugins;%OSGEO4W_ROOT%\apps\qt5\plugins
SET PYCHARM="C:\Program Files\JetBrains\PyCharm 2019.2.3\bin\pycharm64.exe"
set PYTHONPATH=%OSGEO4W_ROOT%\apps\qgis\python
set PYTHONHOME=%OSGEO4W_ROOT%\apps\Python37
set PYTHONPATH=%OSGEO4W_ROOT%\apps\Python37\lib\site-packages;%PYTHONPATH%
set QT_QPA_PLATFORM_PLUGIN_PATH=%OSGEO4W_ROOT%\apps\Qt5\plugins\platforms
set QGIS_PREFIX_PATH=%OSGEO4W_ROOT%\apps\qgis
start "PyCharm aware of QGIS" /B %PYCHARM% %*
Python Script
from qgis.core import QgsApplication, QgsProject, QgsLayoutExporter
import os
import sys


def export_atlas(qgs_project_path, layout_name, outputs_folder):
    # Open existing project
    project = QgsProject.instance()
    project.read(qgs_project_path)
    print(f'Project in "{project.fileName()}" loaded successfully')

    # Open a prepared layout that has the atlas enabled and set up
    layout = project.layoutManager().layoutByName(layout_name)

    # Export atlas
    exporter = QgsLayoutExporter(layout)
    settings = QgsLayoutExporter.PdfExportSettings()
    exporter.exportToPdfs(layout.atlas(), outputs_folder, settings)


def run():
    # Start a QGIS application without GUI
    QgsApplication.setPrefixPath(r"C:\OSGeo4W64\apps\qgis", True)
    qgs = QgsApplication([], False)
    qgs.initQgis()
    sys.path.append(r'C:\OSGeo4W64\apps\qgis\python\plugins')

    project_path = [project_path]
    output_folder = [export_location]
    layout_name_portrait = [portrait layout name]
    layout_name_landscape = [landscape layout name]

    export_atlas(project_path, layout_name_portrait, output_folder)
    export_atlas(project_path, layout_name_landscape, output_folder)

    # Close the QGIS application
    qgs.exitQgis()


run()
I guess that it might have something to do with the setting svg/searchPathsForSVG.
QgsSettings().setValue('svg/searchPathsForSVG', <your path>)
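For example, something along these lines in the standalone script, after qgs.initQgis() and before the export calls; the collections path is only a guess at where the Resource Sharing plugin stores its downloads, so adjust it to your own QGIS profile:

from qgis.core import QgsSettings

# Assumed location of the Resource Sharing plugin's downloaded collections;
# adjust to your own QGIS3 profile directory.
svg_dir = r'C:\Users\<user>\AppData\Roaming\QGIS\QGIS3\profiles\default\resource_sharing\collections'

paths = QgsSettings().value('svg/searchPathsForSVG') or []
if isinstance(paths, str):
    paths = [paths]
if svg_dir not in paths:
    paths.append(svg_dir)
QgsSettings().setValue('svg/searchPathsForSVG', paths)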
I have been searching for a couple of days for a solution, without success.
We have a Windows service built to copy some files from one location to another.
So I built the code shown below with Python 3.7.
The full code can be found on GitHub.
When I run the service using Python, all is working fine; I can install the service and also start it.
Using these commands:
Install the service:
python jis53_backup.py install
Run the service:
python jis53_backup.py start
I then compile this code using pyinstaller with the command:
pyinstaller -F --hidden-import=win32timezone jis53_backup.py
After the exe is created, I can install the service, but when trying to start the service I get the error:
Error starting service: The service did not respond to the start or
control request in a timely fashion
I have gone through multiple posts on Stack Overflow and Google related to this error, however without success. I don't have the option to install Python 3.7 on the PCs that need to run this service; that's why we are trying to get a .exe built.
I have made sure to have the path updated according to the information that I found in the different questions.
Image of path definitions:
I also copied the pywintypes37.dll file.
From -> Python37\Lib\site-packages\pywin32_system32
To -> Python37\Lib\site-packages\win32
Does anyone have any other suggestions on how to get this working?
'''
Windows service to copy a file from one location to another
at a certain interval.
'''
import sys
import time
from distutils.dir_util import copy_tree

import servicemanager
import win32serviceutil
import win32service

from HelperModules.CheckFileExistance import check_folder_exists, create_folder
from HelperModules.ReadConfig import (check_config_file_exists,
                                      create_config_file, read_config_file)
from ServiceBaseClass.SMWinService import SMWinservice

sys.path += ['filecopy_service/ServiceBaseClass',
             'filecopy_service/HelperModules']


class Jis53Backup(SMWinservice):
    _svc_name_ = "Jis53Backup"
    _svc_display_name_ = "JIS53 backup copy"
    _svc_description_ = "Service to copy files from server to local drive"

    def start(self):
        self.conf = read_config_file()
        if not check_folder_exists(self.conf['dest']):
            create_folder(self.conf['dest'])
        self.isrunning = True

    def stop(self):
        self.isrunning = False

    def main(self):
        self.ReportServiceStatus(win32service.SERVICE_RUNNING)
        while self.isrunning:
            # Copy the files from the server to a local folder
            # TODO: build function to trigger only when a file is changed.
            copy_tree(self.conf['origin'], self.conf['dest'], update=1)
            time.sleep(30)


if __name__ == '__main__':
    # Guard the argv lookup so running the exe without arguments
    # (as the service control manager does) doesn't raise an IndexError.
    if len(sys.argv) > 1 and sys.argv[1] == 'install':
        if not check_config_file_exists():
            create_config_file()
    if len(sys.argv) == 1:
        servicemanager.Initialize()
        servicemanager.PrepareToHostSingle(Jis53Backup)
        servicemanager.StartServiceCtrlDispatcher()
    else:
        win32serviceutil.HandleCommandLine(Jis53Backup)
I was also facing this issue after compiling using pyinstaller. For me, the issue was that I was building the paths to the config and log files dynamically, for example:
curr_path = os.path.dirname(os.path.abspath(__file__))
configs_path = os.path.join(curr_path, 'configs', 'app_config.json')
opc_configs_path = os.path.join(curr_path, 'configs', 'opc.json')
log_file_path = os.path.join(curr_path, 'logs', 'application.log')
This was working fine when I was starting the service using python service.py install/start. But after compiling it using pyinstaller, it always gave me the error about not starting in a timely fashion.
To resolve this, I made all the dynamic paths static, for example:
configs_path = 'C:\\Program Files (x86)\\ScantechOPC\\configs\\app_config.json'
opc_configs_path = 'C:\\Program Files (x86)\\ScantechOPC\\configs\\opc.json'
debug_file = 'C:\\Program Files (x86)\\ScantechOPC\\logs\\application.log'
After compiling via pyinstaller, it is now working fine without any error. It looks like with dynamic paths the frozen exe doesn't resolve the actual path to the files, and that is what causes the error.
Hope this solves your problem too. Thanks
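If you would rather not hard-code the paths, another option (just a sketch, not part of the original fix) is to resolve them relative to the executable when frozen, since PyInstaller sets sys.frozen on the bundled exe:

import os
import sys

# When running as a PyInstaller bundle, sys.frozen is set and sys.executable
# points at the exe; otherwise fall back to the source file's directory.
if getattr(sys, 'frozen', False):
    base_dir = os.path.dirname(sys.executable)
else:
    base_dir = os.path.dirname(os.path.abspath(__file__))

# The file names below are only illustrative.
configs_path = os.path.join(base_dir, 'configs', 'app_config.json')
log_file_path = os.path.join(base_dir, 'logs', 'application.log')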
I'm setting up my first Django VM with Vagrant, following this guide. I ran into an error:
==> djangovm: [2014-10-20T15:04:09+02:00] ERROR: Running exception handlers
==> djangovm: [2014-10-20T15:04:09+02:00] ERROR: Exception handlers complete
==> djangovm: [2014-10-20T15:04:09+02:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
==> djangovm: [2014-10-20T15:04:09+02:00] FATAL: NameError: uninitialized constant Chef::Resource::LWRPBase
Found the answer here: Vagrant & Chef: uninitialized constant Chef::Resource::LWRPBase which says to refer to this answer: How to control the version of Chef that Vagrant uses to provision VMs?
I'm just not sure where to put
config.omnibus.chef_version = :latest
in my Vagrantfile.
My Vagrantfile currently looks exactly like the one in that tutorial:
Vagrant::Config.run do |config|
  config.vm.define :djangovm do |django_config|
    # Every Vagrant virtual environment requires a box to build off of.
    django_config.vm.box = "lucid64"

    # The url from where the 'config.vm.box' box will be fetched if it
    # doesn't already exist on the user's system.
    django_config.vm.box_url = "http://files.vagrantup.com/lucid64.box"

    # Forward a port from the guest to the host, which allows for outside
    # computers to access the VM, whereas host only networking does not.
    django_config.vm.forward_port 80, 8080
    django_config.vm.forward_port 8000, 8001

    # Enable provisioning with chef solo, specifying a cookbooks path (relative
    # to this Vagrantfile), and adding some recipes and/or roles.
    #
    django_config.vm.provision :chef_solo do |chef|
      chef.cookbooks_path = "cookbooks"
      chef.add_recipe "apt"
      chef.add_recipe "apache2::mod_wsgi"
      chef.add_recipe "build-essential"
      chef.add_recipe "git"
      chef.add_recipe "vim"
      #
      # # You may also specify custom JSON attributes:
      # chef.json = { :mysql_password => "foo" }
    end
  end
end
I tried adding it in like so:
# The url from where the 'config.vm.box' box will be fetched if it
# doesn't already exist on the user's system.
django_config.vm.box_url = "http://files.vagrantup.com/lucid64.box"
# This gets the latest version of Omnibus
config.omnibus.chef_version = :latest
Then I ran vagrant destroy and then vagrant up, and still got the same error.
You can put it inside the :djangovm definition as you tried (for example, right after the django_config.vm.box_url line), but inside that block you need to use the block variable:
django_config.omnibus.chef_version = :latest
Of course, you also have to install the vagrant-omnibus plugin:
vagrant plugin install vagrant-omnibus
By the way, it is safer to pin Chef to a known version so your provisioning won't break when Chef releases new major versions. For example:
django_config.omnibus.chef_version = "11.6.4"
I have some usernames, passwords, and other configuration set up in the environment variables of an EC2 instance. I have created and activated a virtualenv in which I run my Django server. In the settings file of that Django server I access the environment variables as os.environ['variable'].
Outside the virtualenv the site could access those variables fine. When I run printenv, I see all the variables and their values.
However, the server cannot find them and throws KeyErrors as a result when I call os.environ on them.
Setup: EC2 instance - mod_wsgi - nginx - apache
UPDATE
This started working after setting the variables in django.wsgi in the following way:
os.environ['SQL_PASSWORD'] = 'password'
That alone stopped working once I upgraded to the new EC2 hardware; I am not sure how that was related.
What finally worked was setting the variables individually using SetEnv in the Apache config file. It is still not optimal, because I have to keep the config file checked out on the production machine, but it unblocks me.
SetEnv SQL_PASSWORD password
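If the SetEnv values ever stop showing up in os.environ again (mod_wsgi normally delivers them in the per-request WSGI environ rather than the process environment), one common workaround is a small wrapper in the .wsgi file that copies them across; a rough sketch, with an illustrative variable name:

import os
from django.core.wsgi import get_wsgi_application  # older Django: django.core.handlers.wsgi.WSGIHandler()

_django_app = get_wsgi_application()

def application(environ, start_response):
    # Copy selected SetEnv values from the request environ into os.environ
    # before handing the request to Django.
    for key in ('SQL_PASSWORD',):
        if key in environ:
            os.environ.setdefault(key, str(environ[key]))
    return _django_app(environ, start_response)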
I use nginx and uwsgi on EC2 and have a Chef recipe that builds out my servers. To solve this problem, in Chef I have roles that contain credentials not stored in the app's repo.
A file at /home/user/web/site/environment, which Chef creates based on those roles:
MYSQL_DATABASE=databasename
MYSQL_USER=databaseuser
MYSQL_PASSWORD=databasepassword
MYSQL_HOST=databaseip
MYSQL_PORT=3306
REDIS_HOST=redishost
REDIS_PASSWORD=redispassword
REDIS_PORT=6379
REDIS_DB=0
MEDIA_ROOT=/home/user/web/site/media
STATIC_ROOT=/home/user/web/site/static
At the beginning of my production/staging/etc. settings file I have the following block to read the environment file:
import os, re

try:
    dirname = os.path.dirname(os.path.abspath(__file__))
    # my environment file is always in the same place relative to my project's settings file
    env_path = os.path.normpath(os.path.join(dirname, '..', '..', '..', '..', 'environment'))
    with open(env_path) as f:
        content = f.read()
    for line in content.splitlines():
        m1 = re.match(r'\A([A-Za-z_0-9]+)=(.*)\Z', line)
        if m1:
            key, val = m1.group(1), m1.group(2)
            m2 = re.match(r"\A'(.*)'\Z", val)
            if m2:
                val = m2.group(1)
            m3 = re.match(r'\A"(.*)"\Z', val)
            if m3:
                val = re.sub(r'\\(.)', r'\1', m3.group(1))
            os.environ.setdefault(key, val)
except IOError:
    pass
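With that block in place, the rest of the settings file can read the values as usual; for example, illustrative Django database settings using the keys from the environment file above:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': os.environ['MYSQL_DATABASE'],
        'USER': os.environ['MYSQL_USER'],
        'PASSWORD': os.environ['MYSQL_PASSWORD'],
        'HOST': os.environ['MYSQL_HOST'],
        'PORT': os.environ['MYSQL_PORT'],
    }
}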
Voilà: settings are source-controlled in a separately managed repo (Chef's) and loaded into the app. This may not be the best way, but it is fairly secure, because of how permissions are locked down on the Chef repo and the target servers, and it is easy to deploy.
I'm using the standard routing module with Pylons to try to set up a default route for the home page of my website.
I've followed the instructions in the docs and here http://routes.groovie.org/recipes.html, but when I try http://127.0.0.1:5000/ I just get the 'Welcome to Pylons' default page.
My config/routing.py file looks like this
from pylons import config
from routes import Mapper

def make_map():
    """Create, configure and return the routes Mapper"""
    map = Mapper(directory=config['pylons.paths']['controllers'],
                 always_scan=config['debug'])
    map.minimization = False

    map.connect('/error/{action}', controller='error')
    map.connect('/error/{action}/{id}', controller='error')

    # CUSTOM ROUTES HERE
    map.connect('', controller='main', action='index')

    map.connect('/{controller}/{action}')
    map.connect('/{controller}/{action}/{id}')

    return map
I've also tried
map.connect( '/', controller='main', action='index' )
and (using http://127.0.0.1:5000/homepage/)
map.connect( 'homepage', controller='main', action='index' )
But nothing works at all. I know it's reloading my config file, as I used
paster serve --reload development.ini
to start the server
system info
$ paster --version
PasteScript 1.7.3 from /Library/Python/2.5/site-packages/PasteScript-1.7.3-py2.5.egg (python 2.5.1 (r251:54863, Feb 6 2009, 19:02:12))
You have to delete the static page (myapp/public/index.html). Static
files take priority due to the Cascade configuration at the end of
middleware.py.
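For reference, the relevant part of middleware.py in the stock Pylons project template looks roughly like this (details may vary by version):

from paste.cascade import Cascade
from paste.urlparser import StaticURLParser

# Static files are tried first, so public/index.html wins over the '' route
# until it is removed.
static_app = StaticURLParser(config['pylons.paths']['static_files'])
app = Cascade([static_app, app])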