I'm trying out the data subscription feature of TDengine.
I tested its Python demo.
from taos.tmq import TaosConsumer
# Syntax: `consumer = TaosConsumer(*topics, **args)`
#
# Example:
consumer = TaosConsumer('topic1', 'topic2', td_connect_ip = "127.0.0.1", group_id = "local")
...
When executing the script, I get the following error:
ImportError: cannot import name 'TaosConsumer'
Did I miss some steps?
I think you need to update the version of taospy on your system - taospy is the TDengine connector for Python.
Try running pip install -U taospy to update it, then run your test again.
Make sure you're running Python 3.7 or later.
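If it helps, here is a minimal sketch for confirming what is installed before re-running the demo; it uses only the standard library (importlib.metadata needs Python 3.8+):
from importlib.metadata import version
try:
    print('taospy version:', version('taospy'))
    from taos.tmq import TaosConsumer  # the import that was failing
    print('TaosConsumer import OK')
except Exception as exc:
    print('still not available:', exc)
If the printed version is older than the one documented for data subscription, the pip install -U taospy step above should resolve the ImportError.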
I'm trying to change the value of "dockerversion=" in this bash script.
# Docker Variables
containerid=$(docker ps -qf "name=vaultwarden")
imageid=$(docker images -q vaultwarden/server)
dockerversion=1
---------------
# Stop/RM Image
docker stop $containerid
docker rm $containerid
docker rmi $imageid
I'm using Python, and so far I have:
# Pull Portainer Version
url = 'https://github.com/dani-garcia/vaultwarden/releases/latest'
r = requests.get(url)
version = r.url.split('/')[-1]
# Pull Current Version
with open('vaultwarden-update', 'r') as vaultwarden:
    fileversion = vaultwarden.readlines()
    vcurrentversion = re.sub(r'dockerversion=', '', fileversion[14])
# Check who is higher
if version > vcurrentversion:
    with open('vaultwarden-update', '') as vaultwarden:
        for line in fileversion[14]:
            vaultwarden.write(re.sub(re.escape(vcurrentversion), version))
I basically want python to check the github releases, see if there's a change, compare that number to the bash script variable, update that within the bash-script and run the script.
The "# Check who is higher" part won't work, since I need to keep the rest of the script file intact. I'm mainly looking for a way to append/change a value in the file through Python, dynamically.
Any thoughts?
(this is literally my first python script)
I took your code and changed it just enough to do what you want it to do. You would still need to add some value validation, so that you can log and exit before execution reaches the final part where you rewrite your file (you don't want to open and rewrite the file if there is nothing to rewrite it with).
import re
import requests
from distutils.version import LooseVersion
# Pull Portainer Version
url = 'https://github.com/dani-garcia/vaultwarden/releases/latest'
r = requests.get(url)
github_version = r.url.split('/')[-1]
# Pull Current Version
with open('vaultwarden-update') as vaultwarden:
    file_content = vaultwarden.read()
file_match = re.search(r'(dockerversion=([0-9.]*))', file_content)
file_version = file_match.group(2)
# Check who is higher
if LooseVersion(github_version) > LooseVersion(file_version):
    print(f'github version ({github_version}) > file version ({file_version})')
    with open('vaultwarden-update', 'w') as vaultwarden:
        new_file_content = file_content.replace(file_match.group(1), f'dockerversion={github_version}')
        vaultwarden.write(new_file_content)
Currently for your content, it outputs:
github version (1.23.0) > file version (1)
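As a rough sketch of the validation mentioned above, plus the "run the script" step from your question (assuming vaultwarden-update is the bash script itself), you could guard the rewrite and then execute the script:
import subprocess
# Right after the re.search above, bail out early if the pattern was not found
# or the GitHub redirect gave nothing, so the file is never opened for writing
# with nothing to write with.
if file_match is None or not github_version:
    raise SystemExit('Could not determine one of the versions; nothing to rewrite.')
# Once the file has been rewritten, run the updated bash script.
subprocess.run(['bash', 'vaultwarden-update'], check=True)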
I use Windows Task Scheduler to run an R Script several times a day. The script transforms some new data and adds it to an existing data file.
I want to use reticulate to call a Python script that will send me an email listing how many rows of data were added, and if any errors occurred. This works correctly when I run it line by line from within RStudio. The problem is that it doesn't work when the script runs on schedule. I get the following errors:
Error in py_run_file_impl(file, local, convert) :
Unable to open file 'setup_smtp.py' (does it exist?)
Error in py_get_attr_impl(x, name, silent) :
AttributeError: module '__main__' has no attribute 'message'
Calls: paste0 ... py_get_attr_or_item -> py_get_attr -> py_get_attr_impl
Execution halted
This GitHub answer (https://github.com/rstudio/reticulate/issues/232) makes it sound like reticulate can only be used within RStudio - at least for what I'm trying to do. Does anyone have suggestions?
Sample R script:
library(tidyverse)
library(reticulate)
library(lubridate)
n_rows <- 10
time_raw <- now()
result <- paste0("\nAdded ", n_rows,
" rows to data file at ", time_raw, ".")
try(source_python("setup_smtp.py"))
message_final <- paste0(py$message, result)
try(smtpObj$sendmail(my_email, my_email, message_final))
try(smtpObj$quit())
The Python script ("setup_smtp.py") is like this:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Call from reticulate to log in to email
"""
import smtplib
my_email = '...'
my_password = '...'
smtpObj = smtplib.SMTP('smtp.office365.com', 587)
smtpObj.ehlo()
smtpObj.starttls()
smtpObj.login(my_email, my_password)
message = """From: My Name <email address>
To: My Name <email address>
Subject: Test successful!
"""
This execution problem
This works correctly when I run it line by line from within RStudio. The problem is that it doesn't work when the script runs on schedule
can stem from several causes:
You have multiple Python versions, where smtplib is available in one (e.g., Python 2.7 or Python 3.6) and not the other. Check which Python is being used at the command line, Rscript -e "print(Sys.which('python'))", and in RStudio, Sys.which("python"). Explicitly define which Python.exe to run with reticulate's use_python("/path/to/python").
You have multiple R versions, where Rscript uses a different version than RStudio. Check R.home() in both: Rscript -e "print(R.home())" at the command line and R.home() in RStudio. Explicitly call the Rscript in the appropriate R version's bin folder: /path/to/R #.#/bin/Rscript "/path/to/code.R".
You have multiple reticulate packages installed for the same R version, residing in different library locations, each tied to a different Python version. Check with the matrix returned by installed.packages(), locating the reticulate row(s). Explicitly load the intended one with library(reticulate, lib.loc="/path/to/specific/library").
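If it helps to check from the Python side as well, a tiny diagnostic you could temporarily drop at the top of setup_smtp.py (purely illustrative) prints which interpreter reticulate actually launched, so you can compare the scheduled run against RStudio:
import sys
# Which interpreter reticulate resolved, and its version; if these differ
# between the Task Scheduler run and RStudio, you have found the mismatch.
print(sys.executable)
print(sys.version)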
I have a Python script with Ansible modules integrated. While executing it in PyCharm, I get AnsibleModule not defined, and it does not recognize any Ansible keywords. Any idea? Do I need to install any additional plugins for PyCharm to execute Ansible-based Python scripts? Here is my code:
def main():
    module = AnsibleModule(
        argument_spec = dict(
            hostvars = dict(required=True, type='dict'),
            topology = dict(required=True, type='list'),
            my_hostname = dict(required=True)
        )
    )
    try:
        hostvars = module.params['hostvars']
        topology = module.params['topology']
        my_hostname = module.params['my_hostname']
        facts = get_facts(hostvars, topology, my_hostname)
        module.exit_json(changed=False, ansible_facts=facts)
    except Exception as error:
        module.fail_json(msg=str(error))
Thanks,
Sridhar A
You have to set up your project interpreter in PyCharm. IIRC, this may be a feature only available in PyCharm Pro. You should point the interpreter at your virtualenv, and that will get you up and moving.
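For reference, an Ansible module also normally imports AnsibleModule explicitly; it is not a built-in name. A minimal sketch of the boilerplate (with a trimmed-down argument_spec just for illustration):
# AnsibleModule lives in ansible.module_utils.basic; if the ansible package is
# not installed in the interpreter PyCharm is using, both this import and the
# bare name AnsibleModule will fail to resolve.
from ansible.module_utils.basic import AnsibleModule

def main():
    module = AnsibleModule(argument_spec=dict(my_hostname=dict(required=True)))
    module.exit_json(changed=False)

if __name__ == '__main__':
    main()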
I've used startup scripts on Google Cloud Compute Instances:
setsid python home/junaid_athar/pull.py
And I can run the following script on the VM without issue when logged in at the root directory:
setsid python3 home/junaid_athar/btfx.py
However, when I add setsid python3 home/junaid_athar/btfx.py to the startup script, it throws an error saying:
ImportError: cannot import name 'opentype'
The same script runs fine when I'm logged in, but not when I run it as a startup-script, why and how do I resolve it?
Update: I'm pretty new to programming, and hack away. Here's the script:
import logging
import time
import sys
import json
from btfxwss import BtfxWss
from google.cloud import bigquery
log = logging.getLogger(__name__)
fh = logging.FileHandler('/home/junaid_athar/test.log')
fh.setLevel(logging.CRITICAL)
sh = logging.StreamHandler(sys.stdout)
sh.setLevel(logging.CRITICAL)
log.addHandler(sh)
log.addHandler(fh)
logging.basicConfig(level=logging.DEBUG, handlers=[fh, sh])
def stream_data(dataset_id, table_id, json_data):
    bigquery_client = bigquery.Client()
    dataset_ref = bigquery_client.dataset(dataset_id)
    table_ref = dataset_ref.table(table_id)
    data = json.loads(json_data)
    # Get the table from the API so that the schema is available.
    table = bigquery_client.get_table(table_ref)
    rows = [data]
    errors = bigquery_client.create_rows(table, rows)
wss=BtfxWss()
wss.start()
while not wss.conn.connected.is_set():
    time.sleep(2)
# Subscribe to some channels
wss.subscribe_to_trades('BTCUSD')
# Do something else
t = time.time()
while time.time() - t < 5:
    pass
# Accessing data stored in BtfxWss:
trades_q = wss.trades('BTCUSD') # returns a Queue object for the pair.
while True:
    while not trades_q.empty():
        item = trades_q.get()
        if item[0][0] == 'te':
            json_data = {'SEQ':item[0][0], 'ID':item[0][1][0], 'TIMESTAMP':int(str(item[0][1][1])[:10]), 'PRICE':item[0][1][3], 'AMOUNT':item[0][1][2], 'UNIQUE_TS':item[0][1][1], 'SOURCE':'bitfinex'}
            stream_data('gdax', 'btfxwss', json.dumps(json_data))
# Unsubscribing from channels:
wss.unsubscribe_from_trades('BTCUSD')
# Shutting down the client:
wss.stop()
I'm running it on a Standard 1-CPU 3.75mem machine. (Debian GNU/Linux 9 (stretch)).
I THINK the problem is with the install directory of python3 and its modules, and the difference between how startup scripts are run vs. being logged into the machine. How do I troubleshoot that?
Figured out what was going on. Startup scripts are run as root. I added -u username to the start of the startup script, and it ran as though I were SSH'ed into the server. All is good, thanks all for your help!
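For anyone debugging something similar, a small diagnostic you could temporarily add to the top of btfx.py (purely illustrative) logs who is running the script and which interpreter and module paths are in use; since startup scripts run as root, all of these can differ from your SSH session:
import getpass
import sys
# The user, interpreter, and module search path seen by the startup script
# can all differ from an interactive SSH session.
print('user:', getpass.getuser())
print('interpreter:', sys.executable)
print('module search path:', sys.path)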
How do I run OpenERP on uWSGI?
I found this wsgi script online, but I'm not sure where to place it?
import openerp
try:
    import uwsgi
    uwsgi.port_fork_hook = openerp.wsgi.core.on_starting
except:
    openerp.wsgi.core.on_starting()
# Equivalent of --load command-line option
openerp.conf.server_wide_modules = ['web']
# internal TODO: use openerp.conf.xxx when available
conf = openerp.tools.config
# Path to the OpenERP Addons repository (comma-separated for
# multiple locations)
conf['addons_path'] = '/home/openerp/addons/trunk,/home/openerp/web/trunk/addons'
# Optional database config if not using local socket
#conf['db_name'] = 'mycompany'
#conf['db_host'] = 'localhost'
#conf['db_user'] = 'foo'
#conf['db_port'] = 5432
#conf['db_password'] = 'secret'
# OpenERP Log Level
# DEBUG=10, DEBUG_RPC=8, DEBUG_RPC_ANSWER=6, DEBUG_SQL=5, INFO=20,
# WARNING=30, ERROR=40, CRITICAL=50
# conf['log_level'] = 20
# If --static-http-enable is used, path for the static web directory
#conf['static_http_document_root'] = '/var/www'
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
application = openerp.wsgi.core.application
I installed OpenERP in a virtual environment in /var/www/openerp/venv and I can run it by calling $ openerp-server.
Thanks in advance.
You can just put the script file in the same directory as the openerp-server.py file.
However, when I tested it, it did not work, since gunicorn cannot find openerp at the import openerp statement. The reason is that openerp is not installed as a Python module on the system by the usual installation procedures.
I think it will work when you do an OpenERP install with the DEB package (when you do such an install, you should disable the start script so it will just run from gunicorn).
Let me also make a test install and share the result.
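As a possible workaround while OpenERP only lives inside the virtualenv at /var/www/openerp/venv, you could point the wsgi script at that environment's site-packages before the import openerp line. A rough sketch; the python2.7 segment is a guess, so adjust it to whatever version your venv actually uses:
import site
# Hypothetical path: adjust the Python version segment to match the
# virtualenv at /var/www/openerp/venv mentioned above.
site.addsitedir('/var/www/openerp/venv/lib/python2.7/site-packages')

import openerp  # should now resolve if OpenERP was installed into that venv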