Pypiserver logging in API - python

I'm trying to create a private Python package index using pypiserver via its API.
According to the documentation I can specify the verbosity and the log file in the pypiserver app setup.
This is what I have:
import pypiserver
from pypiserver import bottle
app = pypiserver.app(root='./packages', password_file='.htpasswd', verbosity=4, log_file=r'F:\repo\logfile.txt')
bottle.run(app=app, host='itdevws09', port=8181, server='auto')
However, when I start it with python mypyserver.py, the index starts up and works normally, but no log file is created. If I create one manually, nothing is written to it.
If I start the pypiserver using the command line using:
pypi-server -p 8080 -P .htpasswd -vvvv --log-file F:/repo/logfile.txt ./packages
The log file is created and written to as normal.
I have tried putting log_file and verbosity in the bottle.run() call, but that doesn't work either. How can I get the logging to work?
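For what it's worth, here is a minimal sketch of wiring up logging manually when embedding the app, assuming (as the CLI behaviour suggests) that the --log-file handling lives in pypi-server's command-line entry point rather than in pypiserver.app() itself:
import logging

import pypiserver
from pypiserver import bottle

# Configure Python's logging ourselves; the verbosity/log_file passed to app()
# may not install a file handler when the CLI entry point is bypassed (assumption).
logging.basicConfig(
    level=logging.DEBUG,
    filename=r'F:\repo\logfile.txt',
    format='%(asctime)s|%(name)s|%(levelname)s|%(message)s',
)

app = pypiserver.app(root='./packages', password_file='.htpasswd')
bottle.run(app=app, host='itdevws09', port=8181, server='auto')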


Connect to a Heroku Kafka instance with kafka-python from outside Heroku

I have set up a Heroku Kafka instance and am trying to connect to it using the Python consumer. I save the Heroku environment to a file called .env with heroku config -s > .env, then load and export it before running this Python program:
import os
from kafka import KafkaConsumer

for variable in ['KAFKA_TRUSTED_CERT', 'KAFKA_CLIENT_CERT', 'KAFKA_CLIENT_CERT_KEY']:
    with open(f'{variable}.txt', "w") as text_file:
        print(os.environ[variable], file=text_file)

consumer = KafkaConsumer('test-topic',
                         bootstrap_servers=os.environ['KAFKA_URL'],
                         security_protocol="SSL",
                         ssl_certfile="KAFKA_CLIENT_CERT.txt",
                         ssl_keyfile="KAFKA_CLIENT_CERT_KEY.txt")

for msg in consumer:
    print(msg)
I couldn't find any options that looked like they could load the certificates from a variable, so I put them all in files when I start the program.
When I run the program, it creates the temp files and doesn't complain, but doesn't print any messages.
When I write to the topic using the Heroku CLI like this:
heroku kafka:topics:write test-topic "this is a test"
the Python client doesn't print the message, but I can see it by running:
heroku kafka:topics:tail test-topic
Does anybody know what I am missing in the python consumer configuration?
The official Heroku Kafka documentation:
https://devcenter.heroku.com/articles/kafka-on-heroku#using-kafka-in-python-applications
states that using the Kafka helper is beneficial. If you look at its source code:
https://github.com/heroku/kafka-helper/blob/master/kafka_helper.py
you can see that it writes the Kafka variables to files and creates an ssl_context.
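A sketch of that approach adapted to the consumer above; the file names, the hostname-check setting, and the stripping of the kafka+ssl:// prefixes from KAFKA_URL are assumptions borrowed from the helper rather than anything verified against your setup:
import os
import ssl

from kafka import KafkaConsumer

def env_to_file(env_name, filename):
    # Dump a PEM blob from the Heroku environment into a file on disk.
    with open(filename, 'w') as f:
        f.write(os.environ[env_name])
    return filename

# Build an SSLContext: trust the broker CA and present the client cert/key.
ssl_context = ssl.create_default_context(
    cafile=env_to_file('KAFKA_TRUSTED_CERT', 'trusted_cert.pem'))
ssl_context.load_cert_chain(
    certfile=env_to_file('KAFKA_CLIENT_CERT', 'client_cert.pem'),
    keyfile=env_to_file('KAFKA_CLIENT_CERT_KEY', 'client_key.pem'))
ssl_context.check_hostname = False  # the helper disables hostname checks for Heroku brokers

# KAFKA_URL is a comma-separated list like kafka+ssl://host:port,...
bootstrap = [u.replace('kafka+ssl://', '') for u in os.environ['KAFKA_URL'].split(',')]

consumer = KafkaConsumer('test-topic',
                         bootstrap_servers=bootstrap,
                         security_protocol='SSL',
                         ssl_context=ssl_context)
for msg in consumer:
    print(msg)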

How do I download the code for a specific Google Cloud "service"?

This doc shows the command to download the source of an app I have in App Engine:
appcfg.py -A [YOUR_APP_ID] -V [YOUR_APP_VERSION] download_app [OUTPUT_DIR]
That's fine, but I also have services that I deployed. Using this command I can only seem to download the "default" service. I also deployed "myservice01" and "myservice02" to App Engine in my GCP project. How do I specify which service's code to download?
I tried this command as suggested:
appcfg.py -A [YOUR_APP_ID] -M [YOUR_MODULE] -V [YOUR_APP_VERSION] download_app [OUTPUT_DIR]
It didn't fail, but this is the output I got (and it didn't download anything):
01:30 AM Host: appengine.google.com
01:30 AM Fetching file list...
01:30 AM Fetching files...
Now as a test I tried it with the name of a module I know doesn't exist and I got this error:
Error 400: --- begin server output ---
Version ... of Module ... does not exist.
So I at least know it's successfully finding the module and version, but it doesn't seem to want to download them?
Also specify the module (services used to be called modules):
-M MODULE, --module=MODULE
    Set the module, overriding the module value from
    app.yaml.
So something like:
appcfg.py -A [YOUR_APP_ID] -M [YOUR_MODULE] -V [YOUR_APP_VERSION] download_app [OUTPUT_DIR]
Side note: YOUR_APP_VERSION should really read YOUR_MODULE_VERSION :)
Of course, the answer assumes the app code downloads were not permanently disabled from the Console's GAE App Settings page:
Permanently prohibit code downloads
Once this is set, no one, including yourself, will ever be able to
download the code for this application using the appcfg download_app
command.

yowsup-celery: How to run yowsup-celery in daemon mode, passing the WhatsApp config file as an argument

I am using:
yowsup-celery: https://github.com/jlmadurga/yowsup-celery
to try to integrate WhatsApp into my system.
I have successfully been able to store messages and now want to run Celery in daemon mode rather than running it in the terminal.
To run it normally we use:
celery multi start -P gevent -c 2 -l info --yowconfig:conf_wasap
To run daemon mode we use:
sudo /etc/init.d/celeryd start
How can I pass the config file as an argument here, or is there a way to remove the dependency on passing it as an argument and instead read the file inside the script?
Since yowsup-celery 0.2.0 it has been possible to pass the config file path through configuration instead of as a command-line argument:
YOWSUPCONFIG = "path/to/credentials/file"
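A sketch of where that setting might live, assuming your Celery app loads its configuration from a module via config_from_object; the module name, broker URL, and path are placeholders:
# celeryconfig.py -- loaded with app.config_from_object('celeryconfig') (assumption)
broker_url = 'redis://localhost:6379/0'      # placeholder broker
YOWSUPCONFIG = '/etc/yowsup/conf_wasap'      # path to the WhatsApp credentials file
With the path coming from the configuration, the daemon init script no longer needs the --yowconfig argument.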

"Operation not Permitted" for Redis

I am developing on a Mac which already has Redis installed. By default it doesn't have a redis.conf, so the default settings were used when I ran $ redis-server:
# Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf'
I am trying to use redis-py and have the following:
import redis
r = redis.Redis('localhost')
r.set('foo','bar')
r.get('foo')
but got the following error
redis.exceptions.ResponseError: operation not permitted
I also tried $ redis-cli ping in the terminal, but then I get the following error:
(error) ERR operation not permitted
I suppose that since there is no redis.conf, the default settings don't include a password, right? Anyway, I also tried to create a redis.conf:
$ echo "requirepass foobared" >> redis.conf
$ redis-server redis.conf
Then in another window:
$ redis-cli
redis 127.0.0.1:6379> AUTH foobared
(error) ERR invalid password
I also modified the second line of the Python script to:
r = redis.StrictRedis(host='localhost', port=6379, db=0, password='foobared')
but then I got
redis.exceptions.ResponseError: invalid password
What could I be doing wrong? Thanks.
Without any redis.conf file, Redis uses the default built-in configuration, so the default database file (dump.rdb) and the append-only file are created in the directory the server was started from. Maybe the user running Redis does not have write permission on that directory.
So either give that user the permission, or define another working directory using a config file.
You'll find a default config file for Redis 2.6 in the Redis source distribution.
You must modify this line in the file:
# The working directory.
dir ./
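For example, a couple of lines like these (paths are placeholders) point Redis at a directory the redis user can actually write to; the file can then be passed as redis-server /path/to/redis.conf:
# The working directory: must be writable by the user running redis-server
dir /Users/yourname/redis-data
# Optional: only set this if you actually want password authentication
# requirepass foobared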

What Unix tool to quickly add/remove some text to a Python script?

I'm developing an application using Flask.
I want a quick, automated way to add and remove debug=True in the main function call:
Development:
app.run(debug=True)
Production:
app.run()
For security reasons, as I might expose private/sensitive information about the app if I leave debug mode on "in the wild".
I was thinking of using sed or awk to automate this in a git hook (production version is kept in a bare remote repo that I push to), or including it in a shell script I am going to write to fire up uwsgi and some other "maintenance"-ey tasks that allow the app to be served up properly.
What do you think?
That is not the way to go! My recommendation is to create some configuration Python module (let us say, config.py) with some content such as:
DEBUG = True
Now, in your current code, write this:
import config
app.run(debug=config.DEBUG)
Now, when you run in production, just change DEBUG from True to False. Or you can leave this file unversioned, so the development copy is different from the production copy. This is not uncommon since, for example, one does not use the same database connection parameters in development and production.
Even if you want to update it automatically, just call sed on the config file with the -i flag; it is much safer to update just this one file:
$ sed -i.bkp 's/^ *DEBUG *=.*$/DEBUG = False/' config.py
You could also set an environment variable on the server; your script can detect the presence of this variable and disable debugging.
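A sketch of that idea, with a made-up variable name:
import os

from flask import Flask

app = Flask(__name__)

if __name__ == '__main__':
    # Debug is enabled only when the (hypothetical) YOURAPP_DEBUG variable is set.
    app.run(debug='YOURAPP_DEBUG' in os.environ)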
You probably should not be using app.run in production (and you definitely don't need it if you are using uwsgi). Instead, use one of the several deployment options discussed in the deployment section of Flask's excellent documentation. (app.run simply calls werkzeug.serving.run_simple, Werkzeug's built-in development server.)
That being said, the correct way to do this is not with a post-deploy edit to your source code but with a server-specific config file that changes your settings, as @brandizzi pointed out in his answer.
You can do this in several different ways (Flask has documentation on this too - see Armin's suggestions on configuring from files and handling the development-production switch):
Include both your development and your server's configs in your repository and use an environment variable to switch between them:
# your_app/config/develop.py
DEBUG = True

# your_app/config/production.py
DEBUG = False

# your_app/app.py
from importlib import import_module
from os import environ

from flask import Flask

# If YOURAPP_MODE is set we are on a development box; otherwise default to
# the production settings so debug mode is never enabled by accident.
mode = "develop" if environ.get("YOURAPP_MODE") else "production"
config = import_module("your_app.config." + mode)

app = Flask("your_app")
app.config.from_object(config)
Store your production configuration in a separate repository along with any other server-specific configuration you may need, and load it only if an environment variable is set.
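For that variant, Flask's from_envvar hook fits naturally; a sketch, assuming the server exports YOURAPP_SETTINGS pointing at the server-only config file:
from flask import Flask

app = Flask("your_app")
app.config.from_object("your_app.config.develop")       # defaults checked into the repo
# On the server, export YOURAPP_SETTINGS=/path/to/production.cfg to override them;
# silent=True lets development machines run without the variable set.
app.config.from_envvar("YOURAPP_SETTINGS", silent=True)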
I'd use sed:
sed 's/debug=True//'
portable, scriptable, ubiquitous.
You can also use a NOCOMMIT hook (from gitty):
Set this as a pre-commit hook
if git diff --cached | grep NOCOMMIT > /dev/null; then
    echo "You tried to commit a line containing NOCOMMIT"
    exit 1
fi
exit 0
This will prevent the commit if it contains NOCOMMIT.
You can of course replace NOCOMMIT directly with debug=True in the hook.
