When I build a Docker image from the command line:
docker build -t x .
I can see the build log in the terminal.
But with the Python API, it doesn't show anything:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import docker
import os
route = os.path.dirname(os.path.abspath(__file__))
client = docker.from_env()
client.images.build(
    path=route,
    tag="al3x609/nvnc:latest",
    rm=True
)
How can I see it in real time?
According to the API docs, build returns:
Returns: The first item is the Image object for the image that was built. The second item is a generator of the build logs as JSON-decoded objects.
Try something like:
(imageObj, buildlog) = client.images.build(
    [...]
)
Then you can iterate through buildlog:
for logline in buildlog:
    print(logline)
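Note that images.build() only returns once the build has finished, so the loop above replays the log rather than streaming it. If you need the output while the build is still running, the low-level API can stream it; a minimal sketch, reusing the path and tag from the question:

import os
import docker

route = os.path.dirname(os.path.abspath(__file__))
client = docker.from_env()

# client.api is the low-level APIClient; with decode=True the build call
# yields JSON-decoded dicts as the daemon produces them.
for chunk in client.api.build(path=route, tag="al3x609/nvnc:latest", rm=True, decode=True):
    if "stream" in chunk:
        print(chunk["stream"], end="")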
I want to build a simple Slack Bolt Python project, so I followed this document. But when I use python-dotenv and then run my main file (app.py), I get this error:
As `installation_store` or `authorize` has been used, `token` (or SLACK_BOT_TOKEN env variable) will be ignored.
Although the app should be installed into this workspace, the AuthorizeResult (returned value from authorize) for it was not found.
NOTE: by deleting this line in the main file (app.py):
load_dotenv()
and using the export method for defining the tokens instead, everything works correctly.
This is my main file:
import os
from dotenv import load_dotenv
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler
import logging
load_dotenv()  # by deleting this line, the error goes away, but I want to use python-dotenv

app = App(token=os.environ.get("SLACK_BOT_TOKEN"))
logger = logging.getLogger(__name__)

@app.message("hello")
def message_hello(message, say):
    # say() sends a message to the channel where the event was triggered
    say(f"Hey there <@{message['user']}>!")

# Start your app
if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
I don't know exactly why, but I removed the extra keys in the .env file, and everything worked fine.
In the .env file:
Before:
SLACK_APP_TOKEN=xapp-fake-token
SLACK_BOT_TOKEN=xoxb-fake-token
USER_TOKEN=xoxp-fake-token
SLACK_CLIENT_ID=aa.bb
SLACK_CLIENT_SECRET=YOUR_CLIENT_SECRET
SLACK_API_TOKEN_APP_LEGACY=xoxb-fake-token
SLACK_SIGNING_SECRET=YOUR_SIGNING_SECRET
After:
SLACK_APP_TOKEN=xapp-fake-token
SLACK_BOT_TOKEN=xoxb-fake-token
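The likely cause: App() also picks up SLACK_CLIENT_ID and SLACK_CLIENT_SECRET from the environment, and when those are present Bolt switches to the OAuth/installation flow and ignores token, which matches the warning above. If that is what's happening, a minimal sketch of a workaround that keeps the extra keys in .env but hides the OAuth-related ones from Bolt (variable names taken from the question):

import os
from dotenv import load_dotenv
from slack_bolt import App

load_dotenv()
# Drop the variables that presumably switch Bolt into OAuth mode,
# so App() falls back to plain single-workspace token auth.
for key in ("SLACK_CLIENT_ID", "SLACK_CLIENT_SECRET"):
    os.environ.pop(key, None)

app = App(token=os.environ["SLACK_BOT_TOKEN"])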
I'm working on creating my first cookiecutter. By and large, this has gone well, but I now want to add a Jinja2 filter of my own.
In line with the comments in this issue, I've created a new Jinja2 extension much like the one here. Full code for this extension is here:
https://github.com/seclinch/sigchiproceedings-cookiecutter/commit/5a314fa7207fa8ab7b4024564cec8bb1e1629cad#diff-f4acf470acf9ef37395ef389c12f8613
However, the following simple example demonstrates the same error:
# -*- coding: utf-8 -*-
from jinja2.ext import Extension
def slug(value):
    return value

class PaperTitleExtension(Extension):
    def __init__(self, environment):
        super(PaperTitleExtension, self).__init__(environment)
        environment.filters['slug'] = slug
I've dropped this code into a new jinja2_extensions directory and added a simple __init__.py as follows:
# -*- coding: utf-8 -*-
from paper_title import PaperTitleExtension
__all__ = ['PaperTitleExtension']
Based on this piece of documentation, I've also added the following to my `cookiecutter.json` file:
"_extensions": ["jinja2_extensions.PaperTitleExtension"]
However, running this generates the following error:
$ cookiecutter sigchiproceedings-cookiecutter
Unable to load extension: No module named 'jinja2_extensions'
I'm guessing that I'm missing some step here; can anyone help?
Had the same issue; try executing with the python3 -m option.
My extension in extensions/json_loads.py
import json
from jinja2.ext import Extension
def json_loads(value):
    return json.loads(value)

class JsonLoadsExtension(Extension):
    def __init__(self, environment):
        super(JsonLoadsExtension, self).__init__(environment)
        environment.filters['json_loads'] = json_loads
cookiecutter.json
{
    "service_name": "test",
    "service_slug": "{{ cookiecutter.service_name.replace('-', '_') }}",
    ...
    "_extensions": ["extensions.json_loads.JsonLoadsExtension"]
}
Then I executed python3 -m cookiecutter . no_input=True timestamp="123" extra_dict="{\"features\": [\"redis\", \"grpc_client\"]}" -f and it worked fine. (Running via -m adds the current directory to sys.path, which is presumably why the extensions package can then be found.)
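For completeness, the filter can then be used inside template files; a hypothetical usage with the extra_dict value above:

{% set extra = cookiecutter.extra_dict | json_loads %}
{% for feature in extra["features"] %}
- {{ feature }}
{% endfor %}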
I ran into a similar error earlier.
Unable to load extension: No module named 'cookiecutter_repo_extensions'
The problem was that in my case there was a dependency on 'cookiecutter-repo-extension', which I had not installed in my virtual environment.
The directory containing your extension needs to be on your PYTHONPATH.
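For example, if you run cookiecutter from the directory that contains jinja2_extensions, something like this should work (assuming a POSIX shell):

PYTHONPATH=. cookiecutter sigchiproceedings-cookiecutter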
https://github.com/cookiecutter/cookiecutter/issues/1211#issuecomment-522226155
A PR to improve the docs would be appreciated 📖 ✍️ 🙏
I have the below Python script to download files from Artifactory.
#!/usr/bin/python
# -*- coding: utf-8 -*-
import os
import tarfile
import urllib
from urllib import urlretrieve
import ConfigParser
Config = ConfigParser.ConfigParser()
Config.read('/vivek/release.conf')
code_version = Config.get('main', 'app_version')
os.chdir('/tmp/')
arti_st_url='http://repo.com/artifactory/libs-release-local/com/name/tgz/abc.ear/{0}/abc.ear-{0}.tar.gz'.format(code_version)
arti_st_name='abc.ear-{0}.tar.gz'.format(code_version)
arti_sl_url='http://repo.com/artifactory/libs-release-local/com/name/tgz/def.ear/{0}/def.ear-{0}.tar.gz'.format(code_version)
arti_sl_name='def.ear-{0}.tar.gz'.format(code_version)
urllib.urlretrieve(arti_st_url, arti_st_name)
urllib.urlretrieve(arti_sl_url, arti_sl_name)
oneEAR = 'abc.ear-{0}.tar.gz'.format(code_version)
twoEAR = 'def.ear-{0}.tar.gz'.format(code_version)
tar = tarfile.open(oneEAR)
tar.extractall()
tar.close()
tar1 = tarfile.open(twoEAR)
tar1.extractall()
tar1.close()
os.remove(oneEAR)
os.remove(twoEAR)
This script works perfectly, thanks to Stack Overflow.
Here's the next question. There's a variable "protocol" in release.conf. If it's equal to "localcopy", an existing Python script does something; if "protocol" is equal to "artifactory", the script above should be called and executed. How can I achieve that?
Note: I am a beginner in Python, but my tasks are not. So please help me out, guys.
You could simply use:
import os
os.system("script_path")
to execute the script file. But there has to be a shebang line at the very top of the script file you want to execute. If your Python interpreter lives at /usr/bin/python, that would be:
#!/usr/bin/python
This assumes you are a Linux user. On Windows the shebang isn't supported; Windows decides which program to run a *.py file with from the file association itself.
Edit:
To call those two scripts depending on a config property value, you could just make another script, called for example runthis.py, which contains instructions like:
protocol = Config.get('main', 'protocol')
if protocol == 'localcopy':
    os.system('path_to_localcopy_script')
elif protocol == 'artifactory':
    os.system('path_to_other_script')
Don't forget to import the needed modules (os, ConfigParser) in that new script. Then you just run the script you made.
That is one way to do it.
If you don't want to create an additional script, then put the code you wrote into a function, like:
def main():
    ...
    # your code
    ...
And at the very bottom of your script file write:
if __name__ == '__main__':
    protocol = Config.get('main', 'protocol')
    if protocol == 'localcopy':
        main()
    elif protocol == 'artifactory':
        os.system('path_to_other_script')
The if __name__ == '__main__' block executes only if you run the script yourself (not when it is called from another script, for example).
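As a side note, os.system works, but the subprocess module is the more idiomatic way to run another script and lets you check its exit status; a minimal sketch (the script path is a hypothetical placeholder):

import subprocess

# Hypothetical path to the Artifactory download script shown above.
ret = subprocess.call(['python', '/vivek/artifactory_download.py'])
if ret != 0:
    raise RuntimeError('download script failed with exit code %d' % ret)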
I'm trying to use Ansible's Python API in order to write a test API (in Python) which can run a playbook programmatically and add new nodes to a Hadoop cluster. As we know, at least one node in the cluster has to be the Namenode and JobTracker (MRv1). For simplicity, let's say the JobTracker and the Namenode are on the same node (namenode_ip).
Thus, in order to use Ansible to create a new node and have it register itself with the Namenode, I've created the following Python utility:
from ansible.playbook import PlayBook
from ansible.inventory import Inventory
from ansible.inventory import Group
from ansible.inventory import Host
from ansible import constants as C
from ansible import callbacks
from ansible import utils
import os
import logging
import config
def run_playbook(ipaddress, namenode_ip, playbook, keyfile):
    utils.VERBOSITY = 0
    playbook_cb = callbacks.PlaybookCallbacks(verbose=utils.VERBOSITY)
    stats = callbacks.AggregateStats()
    runner_cb = callbacks.PlaybookRunnerCallbacks(stats, verbose=utils.VERBOSITY)

    host = Host(name=ipaddress)
    group = Group(name="new-nodes")
    group.add_host(host)

    inventory = Inventory(host_list=[], vault_password="Hello123")
    inventory.add_group(group)

    key_file = keyfile
    playbook_path = os.path.join(config.ANSIBLE_PLAYBOOKS, playbook)

    pb = PlayBook(
        playbook=playbook_path,
        inventory=inventory,
        remote_user='deploy',
        callbacks=playbook_cb,
        runner_callbacks=runner_cb,
        stats=stats,
        private_key_file=key_file
    )

    results = pb.run()
    print results
However, the Ansible documentation for the Python API is very poorly written (it gives no detail, just a simple example). What I need is the equivalent of:
ansible-playbook -i hadoop_config -e "namenode_ip=1.2.3.4"
That is, pass the value of namenode_ip dynamically to the inventory using the Python API. How can I do that?
This should be as simple as adding one or more lines like the following to your script, after instantiating your group object and before running your playbook:
group.set_variable("foo", "BAR")
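In the question's run_playbook() that would look something like the sketch below. A group variable does not have the same precedence as -e extra vars, but for supplying namenode_ip to the playbook it should have the same effect:

host = Host(name=ipaddress)
group = Group(name="new-nodes")
group.add_host(host)
group.set_variable("namenode_ip", namenode_ip)  # visible to plays targeting this group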
I often use the following to quickly fire up a web server to serve HTML content from the current folder (for local testing):
python -m SimpleHTTPServer 8000
Is there a reasonably simple way I can do this, but have the server serve the files with a UTF-8 encoding rather than the system default?
Had the same problem; the following worked for me.
To start a SimpleHTTPServer with UTF-8 encoding, simply copy/paste the following into a terminal (for Python 2).
python -c "import SimpleHTTPServer; m = SimpleHTTPServer.SimpleHTTPRequestHandler.extensions_map; m[''] = 'text/plain'; m.update(dict([(k, v + ';charset=UTF-8') for k, v in m.items()])); SimpleHTTPServer.test();"
Ensure that you have the correct charset in your HTML files beforehand.
EDIT: Update for Python 3:
python3 -c "from http.server import test, SimpleHTTPRequestHandler as RH; RH.extensions_map={k:v+';charset=UTF-8' for k,v in RH.extensions_map.items()}; test(RH)"
The test function also accepts arguments like port and bind so that it's possible to specify the address and the port to listen on.
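For example, to listen only on localhost and on port 8080 instead (values chosen just for illustration):

python3 -c "from http.server import test, SimpleHTTPRequestHandler as RH; RH.extensions_map={k:v+';charset=UTF-8' for k,v in RH.extensions_map.items()}; test(RH, port=8080, bind='127.0.0.1')"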
You can run it from a Python script too:
from functools import partial
from http.server import SimpleHTTPRequestHandler, test
import os
print('http://localhost:8000/')
wk_dir = os.getcwd()
SimpleHTTPRequestHandler.extensions_map = {k: v + ';charset=UTF-8' for k, v in SimpleHTTPRequestHandler.extensions_map.items()}
test(HandlerClass=partial(SimpleHTTPRequestHandler, directory=wk_dir), port=8000, bind='')
Since test is not in http.server.__all__, an IDE may show a warning; if you don't want to see it, you can use importlib instead. For example:
import importlib
http_server = importlib.import_module('http.server')
http_server.test(HandlerClass=partial(SimpleHTTPRequestHandler, directory=wk_dir), port=8000, bind='')