Where does "config.omnibus.chef_version = :latest" go in my Vagrantfile? - python

I'm setting up my first Django VM with Vagrant following this guide. Ran into an error:
==> djangovm: [2014-10-20T15:04:09+02:00] ERROR: Running exception handlers
==> djangovm: [2014-10-20T15:04:09+02:00] ERROR: Exception handlers complete
==> djangovm: [2014-10-20T15:04:09+02:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
==> djangovm: [2014-10-20T15:04:09+02:00] FATAL: NameError: uninitialized constant Chef::Resource::LWRPBase
Found the answer here: "Vagrant & Chef: uninitialized constant Chef::Resource::LWRPBase", which says to refer to this answer: "How to control the version of Chef that Vagrant uses to provision VMs?"
I'm just not sure where to put:
config.omnibus.chef_version = :latest
in my Vagrantfile.
My Vagrantfile currently matches that tutorial exactly:
Vagrant::Config.run do |config|
  config.vm.define :djangovm do |django_config|
    # Every Vagrant virtual environment requires a box to build off of.
    django_config.vm.box = "lucid64"

    # The url from where the 'config.vm.box' box will be fetched if it
    # doesn't already exist on the user's system.
    django_config.vm.box_url = "http://files.vagrantup.com/lucid64.box"

    # Forward a port from the guest to the host, which allows for outside
    # computers to access the VM, whereas host only networking does not.
    django_config.vm.forward_port 80, 8080
    django_config.vm.forward_port 8000, 8001

    # Enable provisioning with chef solo, specifying a cookbooks path (relative
    # to this Vagrantfile), and adding some recipes and/or roles.
    #
    django_config.vm.provision :chef_solo do |chef|
      chef.cookbooks_path = "cookbooks"
      chef.add_recipe "apt"
      chef.add_recipe "apache2::mod_wsgi"
      chef.add_recipe "build-essential"
      chef.add_recipe "git"
      chef.add_recipe "vim"
      #
      # # You may also specify custom JSON attributes:
      # chef.json = { :mysql_password => "foo" }
    end
  end
end
I tried adding it in like so:
    # The url from where the 'config.vm.box' box will be fetched if it
    # doesn't already exist on the user's system.
    django_config.vm.box_url = "http://files.vagrantup.com/lucid64.box"

    # This gets the latest version of Omnibus
    config.omnibus.chef_version = :latest
Then I did vagrant destroy and vagrant up, and still got the same error.

You can put it inside the :djangovm definition as you tried, but there you need to use the inner config object:
django_config.omnibus.chef_version = :latest
Of course, you also have to install the vagrant-omnibus plugin first:
vagrant plugin install vagrant-omnibus
By the way, it is safer to lock the Chef version to a known release so your provisioning won't break when Chef ships a new major version. For example:
django_config.omnibus.chef_version = "11.6.4"
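Putting it together, the relevant part of the Vagrantfile might look like the sketch below (based on the question's Vagrantfile; only the omnibus line is new, pinned as suggested):

Vagrant::Config.run do |config|
  config.vm.define :djangovm do |django_config|
    django_config.vm.box = "lucid64"
    django_config.vm.box_url = "http://files.vagrantup.com/lucid64.box"

    # Provided by the vagrant-omnibus plugin; pin Chef to a known version.
    django_config.omnibus.chef_version = "11.6.4"

    # ... the rest of the configuration stays unchanged ...
  end
end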

Related

how to override downloader directory with local directory in polyglot in python

I run polyglot sentiment detection. When I upload it to the server I cannot run the downloader.download("TASK:sentiment2") command, so I downloaded the sentiment2 folder and saved it in the same folder as the python file.
I tried to set downloader.download_dir = os.path.join(os.getcwd(),'polyglot_data') pointing at the sentiment2 folder location, as the polyglot documentation says, but it doesn't work.
How do I override downloader directory so it will access the sentiment2 local folder when it executes the sentiment analysis?
Please see the full code below. This code works on my computer and localhost but returns zero when I run it on the server.
import os

from polyglot.downloader import downloader
#downloader.download("TASK:sentiment2")
from polyglot.text import Text

downloader.download_dir = os.path.join(os.getcwd(), 'polyglot_data')

def get_text_sentiment(text):
    result = 0
    ttext = Text(text)
    for w in ttext.words:
        try:
            result += w.polarity
        except ValueError:
            pass
    if result:
        return result / len(ttext.words)
    else:
        return 0

text = "he is feeling proud with ❤"
print(get_text_sentiment(text))
My localhost returns 0.1666; the server returns 0.0.
After looking at the polyglot __init__ function on GitHub, it turned out to be an environment issue: data_path = os.environ.get('POLYGLOT_DATA_PATH', data_path).
So I removed the downloader.download_dir = os.path.join(os.getcwd(), 'polyglot_data') line and simply defined POLYGLOT_DATA_PATH in the environment instead.
It worked.
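A minimal sketch of that approach, assuming the sentiment2 data was saved under a polyglot_data folder next to the script (setting the variable in the process environment before polyglot is imported; defining it in the server's environment, as above, works the same way):

import os

# Point polyglot at the local data folder *before* importing it, since the
# downloader reads POLYGLOT_DATA_PATH when it initializes
# (data_path = os.environ.get('POLYGLOT_DATA_PATH', data_path)).
os.environ["POLYGLOT_DATA_PATH"] = os.path.join(os.getcwd(), "polyglot_data")

from polyglot.text import Text  # imported only after the variable is set

print(Text("he is feeling proud with ❤").words)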

GitLab runner, error on push whereas same thing works on git bash or python from a console

Context
We're trying to set up a GitLab runner job that, on a certain tag, modifies a version header file and adds a release branch/tag for this changeset.
The GitLab runner server is on my machine, launched as a service by my user (that is properly registered to our GitLab server).
The GitLab runner job basically launches a Python script that uses gitpython to do the job. There are only a few changes in the runner yml file (I added a before_script part to be able to have upload permission, taken from here: https://stackoverflow.com/a/55344804/11159476). Here is the full .gitlab-ci.yml file:
variables:
  GIT_SUBMODULE_STRATEGY: recursive

stages: [ build, publish, release ]

release_tag:
  stage: build
  before_script:
    - git config --global user.name ${GITLAB_USER_NAME}
    - git config --global user.email ${GITLAB_USER_EMAIL}
  script:
    - python .\scripts\release_gitlab_runner.py
  only:
    # Trigger on specific regex...
    - /^Src_V[0-9]+\.[0-9]+\.[0-9]+$/
  except:
    # ... only for tags, then except branches, see doc (https://docs.gitlab.com/ee/ci/yaml/#regular-expressions): "Only the tag or branch name can be matched by a regular expression."
    - branches
I also added a trick to the push URL in the Python script (pushing with user:personal_access_token@repo_URL instead of the default runner URL, taken from the same answer as above; the token was generated from the company GitLab under user "Settings" => "Access Tokens" => "Add a personal access token", with all rights and never expiring). Below is not the actual scripts\release_gitlab_runner.py script, but a simplified one with a git flow as standard as possible for what we want (fetch all, create a local branch with a random name so that it does not already exist, modify a file, stage, commit and finally push):
# -*-coding:utf-8 -*
import uuid
import re
import git
import sys
import os

# Since we are in <git root path>/scripts folder, git root path is this file's path parent path
GIT_ROOT_PATH = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

try:
    # Get user login and URL from GITLAB_USER_LOGIN and CI_REPOSITORY_URL gitlab environment variables
    gitlabUserLogin = os.environ["GITLAB_USER_LOGIN"]
    gitlabFullURL = os.environ["CI_REPOSITORY_URL"]
    # Push at "https://${GITLAB_USER_NAME}:${PERSONAL_ACCESS_TOKEN}@gitlab.companyname.net/demo/demo.git"
    # generatedPersonalAccessToken has been generated with full rights from https://gitlab.companyname.net/profile/personal_access_tokens and set in a variable not seen here
    gitlabPushURL = "https://{}:{}@{}".format(gitlabUserLogin, generatedPersonalAccessToken, gitlabFullURL.split("@")[-1])
    print("gitlabFullURL is [{}]".format(gitlabFullURL))
    print("gitlabPushURL is [{}]".format(gitlabPushURL))
    branchName = str(uuid.uuid1())
    print("Build git.Repo object with [{}] root path".format(GIT_ROOT_PATH))
    repo = git.Repo(GIT_ROOT_PATH)
    print("Fetch all")
    repo.git.fetch("-a")
    print("Create new local branch [{}]".format(branchName))
    repo.git.checkout("-b", branchName)
    print("Modify file")
    versionFile = os.path.join(GIT_ROOT_PATH, "public", "include", "Version.h")
    patchedVersionFileContent = ""
    with open(versionFile, 'r') as versionFileContent:
        patchedVersionFileContent = versionFileContent.read()
    patchedVersionFileContent = re.sub("#define VERSION_MAJOR 0", "#define VERSION_MAJOR {}".format(75145), patchedVersionFileContent)
    with open(versionFile, 'w') as versionFileContent:
        versionFileContent.write(patchedVersionFileContent)
    print("Stage file")
    repo.git.add("-u")
    print("Commit file")
    repo.git.commit("-m", "New version file in new branch {}".format(branchName))
    print("Push new branch [{}] remotely".format(branchName))
    # The error is at the line below:
    repo.git.push(gitlabPushURL, "origin", branchName)
    sys.exit(0)
except Exception as e:
    print("Exception: {}".format(e))
    sys.exit(-1)
Problem
Even with the trick to get push rights, when we try to push from the GitLab runner the following error is raised:
Cmd('git') failed due to: exit code(1)
  cmdline: git push https://user:token@gitlab.companyname.net/demo/repo.git origin 85a3fa6e-690a-11ea-a07d-e454e8696d31
  stderr: 'error: src refspec origin does not match any
error: failed to push some refs to 'https://user:token@gitlab.companyname.net/demo/repo.git''
What works
If I open Git Bash, I can successfully run the commands manually:
git fetch -a
git checkout -b newBranch
vim public/include/Version.h
=> At this point the file has been modified
git add -u
git commit -m "New version file in new branch"
git push origin newBranch
If we then fetch from somewhere else, we can see newBranch with the version file modifications.
The same works if we run the script content (without the URL modification) from a Python command line (assuming all the imports from the script have been performed):
GIT_ROOT_PATH = "E:\\path\\to\\workspace\\repo"
branchName = str(uuid.uuid1())
repo = git.Repo(GIT_ROOT_PATH)
repo.git.fetch("-a")
repo.git.checkout("-b", branchName)
versionFile = os.path.join(GIT_ROOT_PATH, "public", "include", "Version.h")
patchedVersionFileContent = ""
with open(versionFile, 'r') as versionFileContent:
    patchedVersionFileContent = versionFileContent.read()
patchedVersionFileContent = re.sub("#define VERSION_MAJOR 0", "#define VERSION_MAJOR {}".format(75145), patchedVersionFileContent)
with open(versionFile, 'w') as versionFileContent:
    versionFileContent.write(patchedVersionFileContent)
repo.git.add("-u")
repo.git.commit("-m", "New version file in new branch {}".format(branchName))
repo.git.push("origin", branchName)
Conclusion
I can't find what I'm doing wrong when running from the GitLab runner; is there something I'm missing?
The only thing I can see that is different when running from the GitLab runner is that after the fetch I am on a detached HEAD (listing repo.git.branch('-a').split('\n') gives for example ['* (HEAD detached at 560976b)', 'branchName', 'remotes/origin/otherExistingBranch', ...]), but this should not be a problem since I create a new branch to push from, right?
Git is telling you that you used the wrong refspec. When you need to push to another remote, you have to create it first with gitlab = repo.create_remote("gitlab", gitlabPushURL) and then push to it, e.g. repo.git.push("gitlab", branchName).
Edit from @gluttony, to avoid breaking on the next run with "remote already exists":
remote_name = "gitlab"
if remote_name not in repo.remotes:
    repo.create_remote(remote_name, gitlabPushURL)
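Putting the answer together with the question's variables, a minimal sketch of the corrected push step (the remote name "gitlab" is arbitrary, and the URL and path below are placeholders for the tokenized gitlabPushURL and checkout path built in the script):

import uuid
import git

GIT_ROOT_PATH = "/path/to/repo"  # placeholder for the runner's checkout
gitlabPushURL = "https://user:personal_access_token@gitlab.companyname.net/demo/demo.git"  # placeholder
branchName = str(uuid.uuid1())

repo = git.Repo(GIT_ROOT_PATH)
repo.git.checkout("-b", branchName)

remote_name = "gitlab"
if remote_name in repo.remotes:
    gitlab = repo.remote(remote_name)  # reuse it on later pipeline runs
else:
    gitlab = repo.create_remote(remote_name, gitlabPushURL)

# Push the new branch to that remote instead of passing the URL and "origin"
# together, which git tries to read as a refspec.
gitlab.push(branchName)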

pyinstaller Error starting service: The service did not respond to the start or control request in a timely fashion

I have been searching for a couple of days for a solution, without success.
We have a Windows service built to copy some files from one location to another, so I built the code shown below with Python 3.7.
The full code can be found on GitHub.
When I run the service using Python, everything works fine: I can install the service and also start it.
Using these commands:
Install the service:
python jis53_backup.py install
Run the service:
python jis53_backup.py start
I then compile this code using pyinstaller with the command:
pyinstaller -F --hidden-import=win32timezone jis53_backup.py
After the exe is created, I can install the service, but when I try to start it I get the error:
Error starting service: The service did not respond to the start or
control request in a timely fashion
I have gone through multiple posts on Stack Overflow and Google related to this error, but without success. I don't have the option to install Python 3.7 on the PCs that need to run this service; that's why we are trying to get a .exe build.
I have made sure the PATH is updated according to the information I found in the different questions (screenshot of the path definitions omitted here).
I also copied the pywintypes37.dll file.
From -> Python37\Lib\site-packages\pywin32_system32
To -> Python37\Lib\site-packages\win32
Does anyone have any other suggestions on how to get this working?
'''
Windows service to copy a file from one location to another
at a certain interval.
'''
import sys
import time
from distutils.dir_util import copy_tree

import servicemanager
import win32serviceutil
import win32service

from HelperModules.CheckFileExistance import check_folder_exists, create_folder
from HelperModules.ReadConfig import (check_config_file_exists,
                                      create_config_file, read_config_file)
from ServiceBaseClass.SMWinService import SMWinservice

sys.path += ['filecopy_service/ServiceBaseClass',
             'filecopy_service/HelperModules']


class Jis53Backup(SMWinservice):
    _svc_name_ = "Jis53Backup"
    _svc_display_name_ = "JIS53 backup copy"
    _svc_description_ = "Service to copy files from server to local drive"

    def start(self):
        self.conf = read_config_file()
        if not check_folder_exists(self.conf['dest']):
            create_folder(self.conf['dest'])
        self.isrunning = True

    def stop(self):
        self.isrunning = False

    def main(self):
        self.ReportServiceStatus(win32service.SERVICE_RUNNING)
        while self.isrunning:
            # Copy the files from the server to a local folder
            # TODO: build function to trigger only when a file is changed.
            copy_tree(self.conf['origin'], self.conf['dest'], update=1)
            time.sleep(30)


if __name__ == '__main__':
    if sys.argv[1] == 'install':
        if not check_config_file_exists():
            create_config_file()
    if len(sys.argv) == 1:
        servicemanager.Initialize()
        servicemanager.PrepareToHostSingle(Jis53Backup)
        servicemanager.StartServiceCtrlDispatcher()
    else:
        win32serviceutil.HandleCommandLine(Jis53Backup)
I was also facing this issue after compiling with pyinstaller. For me, the problem was that I was building the paths to the config and log files dynamically, for example:
curr_path = os.path.dirname(os.path.abspath(__file__))
configs_path = os.path.join(curr_path, 'configs', 'app_config.json')
opc_configs_path = os.path.join(curr_path, 'configs', 'opc.json')
log_file_path = os.path.join(curr_path, 'logs', 'application.log')
This worked fine when I started the service with python service.py install/start, but after compiling it with pyinstaller it always gave me the "did not respond in a timely fashion" error.
To resolve this, I made all the dynamic paths static, for example:
configs_path = 'C:\\Program Files (x86)\\ScantechOPC\\configs\\app_config.json'
opc_configs_path = 'C:\\Program Files (x86)\\ScantechOPC\\configs\\opc.json'
debug_file = 'C:\\Program Files (x86)\\ScantechOPC\\logs\\application.log'
After compiling via pyinstaller, it now works without any error. It looks like with dynamic paths the frozen executable doesn't resolve the actual file locations (a one-file PyInstaller build runs from a temporary extraction directory, so __file__-based paths no longer point at the install folder), and thus it fails.
Hope this solves your problem too. Thanks
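As an alternative to hard-coding the paths, a common pattern (not from the answer above, just a hedged sketch) is to detect whether the script is running as a frozen executable and anchor the paths next to the .exe instead of next to __file__:

import os
import sys

# PyInstaller sets sys.frozen on the bundled executable; sys.executable then
# points at the .exe itself, which is a stable location for configs and logs.
if getattr(sys, 'frozen', False):
    base_path = os.path.dirname(sys.executable)
else:
    base_path = os.path.dirname(os.path.abspath(__file__))

# Hypothetical layout mirroring the answer's configs/ and logs/ folders.
configs_path = os.path.join(base_path, 'configs', 'app_config.json')
log_file_path = os.path.join(base_path, 'logs', 'application.log')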

mod_wsgi does not find environment variable in virtualenv

I have some usernames, passwords and other configuration values set up in the environment variables of an EC2 instance. I have created and activated a virtualenv in which I run my Django server. In the settings file of that Django server I access the environment variables as os.environ['variable'].
Outside the virtualenv the site could access those variables fine, and when I run printenv I see all the variables and their values.
However, the server cannot find them and throws KeyErrors when I access them via os.environ.
Setup: EC2 instance - mod_wsgi - nginx - apache
UPDATE
This started working after setting the variables in django.wsgi in the following way:
os.environ['SQL_PASSWORD'] = 'password'
That alone stopped working once I upgraded to the new EC2 hardware; I am not sure how that was related.
Now, what finally worked was setting the variables individually using SetEnv in the Apache config file. It is still not optimal, because I have to keep the config file checked out on the production machine, but it unblocks me.
SetEnv SQL_PASSWORD password
I use nginx and uWSGI on EC2 and have a Chef recipe that builds out my servers. To solve this problem, in Chef I have roles that contain credentials not stored in the app's repo.
Here is the file at /home/user/web/site/environment that Chef creates based on those roles:
MYSQL_DATABASE=databasename
MYSQL_USER=databaseuser
MYSQL_PASSWORD=databasepassword
MYSQL_HOST=databaseip
MYSQL_PORT=3306
REDIS_HOST=redishost
REDIS_PASSWORD=redispassword
REDIS_PORT=6379
REDIS_DB=0
MEDIA_ROOT=/home/user/web/site/media
STATIC_ROOT=/home/user/web/site/static
At the beginning of my production/staging/etc. settings file, I have the following block to read the environment file:
import os, re

try:
    dirname = os.path.dirname(os.path.abspath(__file__))
    # my environment file is always in the same place relative to my project's settings file
    env_path = os.path.normpath(os.path.join(dirname, '..', '..', '..', '..', 'environment'))
    with open(env_path) as f:
        content = f.read()
    for line in content.splitlines():
        m1 = re.match(r'\A([A-Za-z_0-9]+)=(.*)\Z', line)
        if m1:
            key, val = m1.group(1), m1.group(2)
            m2 = re.match(r"\A'(.*)'\Z", val)
            if m2:
                val = m2.group(1)
            m3 = re.match(r'\A"(.*)"\Z', val)
            if m3:
                val = re.sub(r'\\(.)', r'\1', m3.group(1))
            os.environ.setdefault(key, val)
except IOError:
    pass
Voila, settings are source controlled in a separately managed repo (Chef's) and loaded into the app. This may not be the BEST way, but it is fairly secure because of how permissions are locked down on the Chef repo and the target servers, and it is easy to deploy.
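For completeness, once that block has run, the rest of the settings file can read the values as usual; a small sketch using Django's standard DATABASES setting and the variable names from the environment file above:

import os

# Values were loaded into os.environ by the block above.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': os.environ['MYSQL_DATABASE'],
        'USER': os.environ['MYSQL_USER'],
        'PASSWORD': os.environ['MYSQL_PASSWORD'],
        'HOST': os.environ['MYSQL_HOST'],
        'PORT': os.environ['MYSQL_PORT'],
    }
}

MEDIA_ROOT = os.environ['MEDIA_ROOT']
STATIC_ROOT = os.environ['STATIC_ROOT']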

OpenERP on uWSGI?

How do I run OpenERP on uWSGI?
I found this wsgi script online, but I'm not sure where to place it?
import openerp
try:
    import uwsgi
    uwsgi.port_fork_hook = openerp.wsgi.core.on_starting
except:
    openerp.wsgi.core.on_starting()
# Equivalent of --load command-line option
openerp.conf.server_wide_modules = ['web']
# internal TODO: use openerp.conf.xxx when available
conf = openerp.tools.config
# Path to the OpenERP Addons repository (comma-separated for
# multiple locations)
conf['addons_path'] = '/home/openerp/addons/trunk,/home/openerp/web/trunk/addons'
# Optional database config if not using local socket
#conf['db_name'] = 'mycompany'
#conf['db_host'] = 'localhost'
#conf['db_user'] = 'foo'
#conf['db_port'] = 5432
#conf['db_password'] = 'secret'
# OpenERP Log Level
# DEBUG=10, DEBUG_RPC=8, DEBUG_RPC_ANSWER=6, DEBUG_SQL=5, INFO=20,
# WARNING=30, ERROR=40, CRITICAL=50
# conf['log_level'] = 20
# If --static-http-enable is used, path for the static web directory
#conf['static_http_document_root'] = '/var/www'
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
application = openerp.wsgi.core.application
I installed OpenERP in a virtual environment in /var/www/openerp/venv and I can run it by calling $ openerp-server.
Thanks in advance.
You can just put the script file in the same directory as the openerp-server.py file.
However, when I tested it, it did not work, since gunicorn cannot find openerp at the import openerp statement. The reason is that openerp is not installed as a Python module on the system by the usual installation procedures.
I think it will work when you do an OpenERP install with the DEB package (with such an install you should disable the start script, so it will just run from gunicorn).
Let me also make a test install and share the result.
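For reference, once import openerp resolves (for example after installing OpenERP into the virtualenv), an invocation along the following lines should point uWSGI at that script; the script path and worker counts are assumptions, only the 8069 default port comes from OpenERP itself:

uwsgi --http :8069 \
      --virtualenv /var/www/openerp/venv \
      --wsgi-file /var/www/openerp/openerp-wsgi.py \
      --master --processes 2 --enable-threads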
