I use Windows Task Scheduler to run an R Script several times a day. The script transforms some new data and adds it to an existing data file.
I want to use reticulate to call a Python script that will send me an email listing how many rows of data were added and whether any errors occurred. This works correctly when I run it line by line from within RStudio. The problem is that it doesn't work when the script runs on schedule; I get the following errors:
Error in py_run_file_impl(file, local, convert) :
Unable to open file 'setup_smtp.py' (does it exist?)
Error in py_get_attr_impl(x, name, silent) :
AttributeError: module '__main__' has no attribute 'message'
Calls: paste0 ... py_get_attr_or_item -> py_get_attr -> py_get_attr_impl
Execution halted
This GitHub answer (https://github.com/rstudio/reticulate/issues/232) makes it sound like reticulate can only be used within RStudio, at least for what I'm trying to do. Does anyone have suggestions?
Sample R script:
library(tidyverse)
library(reticulate)
library(lubridate)
n_rows <- 10
time_raw <- now()
result <- paste0("\nAdded ", n_rows,
" rows to data file at ", time_raw, ".")
try(source_python("setup_smtp.py"))
message_final <- paste0(py$message, result)
try(smtpObj$sendmail(my_email, my_email, message_final))
try(smtpObj$quit())
The Python script ("setup_smtp.py") is like this:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Call from reticulate to log in to email
"""
import smtplib
my_email = '...'
my_password = '...'
smtpObj = smtplib.SMTP('smtp.office365.com', 587)
smtpObj.ehlo()
smtpObj.starttls()
smtpObj.login(my_email, my_password)
message = """From: My Name <email address>
To: My Name <email address>
Subject: Test successful!
"""
This execution problem ("This works correctly when I run it line by line from within RStudio. The problem is that it doesn't work when the script runs on schedule") can stem from several causes:
You have multiple Python versions, where smtplib is installed in one (e.g., Python 2.7 or Python 3.6) but not the other. Check which Python is being used at the command line, Rscript -e "print(Sys.which('python'))", and in RStudio, Sys.which("python"). Explicitly define which Python executable to run with reticulate's use_python("/path/to/python").
You have multiple R versions, where Rscript uses a different version than RStudio. Check R.home() in both: Rscript -e "print(R.home())" at the command line, and R.home() in RStudio. Explicitly call the Rscript in the bin folder of the required R version: "/path/to/R #.#/bin/Rscript" "/path/to/code.R".
You have multiple copies of the reticulate package installed for the same R version, residing in different library locations and each binding a different Python version. Check with installed.packages(), locating the reticulate row(s). Explicitly load the intended copy with library(reticulate, lib.loc="/path/to/specific/library").
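If the interpreters do differ, a quick way to see which one the scheduled job binds is to source a small diagnostic from the R script. A minimal sketch (the file name diag_python.py is hypothetical):

# diag_python.py - hypothetical helper, sourced via source_python("diag_python.py")
import sys

# sys.executable is the absolute path of the interpreter reticulate bound;
# comparing its output between an RStudio run and a scheduled Rscript run
# exposes a version or environment mismatch immediately.
print("Python executable:", sys.executable)
print("Python version:", sys.version)

Note also that the first error (Unable to open file 'setup_smtp.py') suggests the scheduled task starts in a different working directory than RStudio does, so sourcing the Python script by absolute path is the safer choice.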
Related
I am facing a basic issue importing a Python script into Pig. I'm just trying a simple script like this:
Pig script:
REGISTER 'test.py' using jython as testPyUDF;
load_data = LOAD '$input_path' USING PigStorage(',') AS (row1);
resp = FOREACH load_data generate row1, testPyUDF.testFunc() as (test:CHARARRAY);
DUMP resp;
test.py (Python UDF to import):
@outputSchema("test:chararray")
def testFunc():
    return "test"
But this results in an error: ExecException: ERROR 1070: Could not resolve testPyUDF.testFunc using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin., com.yahoo.yst.sds.ULT., org.apache.pig.piggybank.evaluation., org.apache.pig.piggybank.evaluation.datetime., org.apache.pig.piggybank.evaluation.decode., org.apache.pig.piggybank.evaluation.math., org.apache.pig.piggybank.evaluation.stats., org.apache.pig.piggybank.evaluation.string., org.apache.pig.piggybank.evaluation.util., org.apache.pig.piggybank.evaluation.util.apachelogparser., string., util., math., datetime., sequence., util., java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
I even tried changing the python script path in REGISTER to HDFS path:
REGISTER 'hdfs:///user/xxx/......./test.py' using jython as testPyUDF;
and also an absolute path on the local filesystem:
REGISTER '/homes/user/..../test.py' using jython as testPyUDF;
No luck either way. What am I missing here?
I am attempting to learn how to call Q# from Python code and have it run for real on the IONQ QPU, as it does (or appears to do) when running the raw Q# code from Visual Studio with dotnet run. I followed the guides and workshop.
Python code:
import qsharp
import qsharp.azure
qsharp.projects.add("****path to *******/TestIONQ.csproj")
from TestIONQ import GetRandomResult
print(f"Simulated Result: {GetRandomResult.simulate()}")
print("------------------------------------------------")
qsharp.azure.connect(
    subscription = "****************************",
    resourceGroup = "**************",
    workspace = "************",
    location = "******* US")
qsharp.azure.target("ionq.qpu")
result = qsharp.azure.execute(GetRandomResult, jobName="Generate random bit")
print(f" Final result from IONQ - QPU: {result}")
Q# code:
namespace TestIONQ {
    open Microsoft.Quantum.Canon;
    open Microsoft.Quantum.Intrinsic;

    // @EntryPoint()
    operation GetRandomResult() : Result {
        use q = Qubit();
        H(q);
        return M(q);
    }
}
and the .csproj file:
<Project Sdk="Microsoft.Quantum.Sdk/0.16.2104138035">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <ExecutionTarget>ionq.qpu</ExecutionTarget>
    <IQSharpLoadAutomatically>true</IQSharpLoadAutomatically>
  </PropertyGroup>
</Project>
The results of running the above Python code in Anaconda qsharp-env environment (Python 3.7.10) are as follows:
Simulated Result: 0
------------------------------------------------
Connected to Azure Quantum workspace ####### in location #####us.
Loading package Microsoft.Quantum.Providers.IonQ and dependencies...
Active target is now ionq.qpu
Submitting TestIONQ.GetRandomResult to target ionq.qpu...
Failed to submit Q# operation TestIONQ.GetRandomResult for execution.
Could not load file or assembly 'System.Text.Json, Version=5.0.0.0, Culture=neutral, PublicKeyToken=####token#####'. The system cannot find the file specified.
Obviously, there is no problem connecting to Azure and the workspace. In fact, I can run the container-ship optimization example from Python with no problem. The first half of the Python code, where .simulate() is invoked, also works fine.
Next, when I try to bypass the IONQ QPU and use its own simulator by changing this one line:
qsharp.azure.target("ionq.simulator")
The resulting error is the same and the results are as follows:
Simulated Result: 1
------------------------------------------------
Connected to Azure Quantum workspace ######## in location #######.
Loading package Microsoft.Quantum.Providers.IonQ and dependencies...
Active target is now ionq.simulator
Submitting TestIONQ.GetRandomResult to target ionq.simulator...
Failed to submit Q# operation TestIONQ.GetRandomResult for execution.
Could not load file or assembly 'System.Text.Json, Version=5.0.0.0, Culture=neutral, PublicKeyToken='....token......'. The system cannot find the file specified.
Traceback (most recent call last):
File "ionq_sim_remote.py", line 18, in <module>
result = qsharp.azure.execute(GetRandomResult, jobName="Generate random bit")
File "F:\Python38\miniconda\envs\qsharp-env\lib\site-packages\qsharp\azure.py", line 137, in execute
if "error_code" in result: raise AzureError(result)
qsharp.azure.AzureError: {'error_code': 1010, 'error_name': 'JobSubmissionFailed', 'error_description': 'Failed to submit the job to the Azure Quantum workspace.'}
The same Q# code runs easily on Azure from the Visual Studio command line, using a variant of what was shown during the workshop:
az quantum execute --target-id ionq.qpu --job-name IONQ_test --resource-group ***rg name*** --workspace-name ***ws name*** --location **** -o table
and indeed this appears to have run on the actual QPU hardware as compared to the simulator (which gives the exact 0.5/0.5 result).
Result Frequency
-------- ----------
0 0.49800000
1 0.50200000
But then calling that same Q# code from Python, including the same .csproj file, seems to throw this JSON assembly error, even with the qsharp-env loaded into Anaconda. I apologize if it is something silly that I have done; I am trying to learn here.
By the way, the following works well as a way around the problem, with no Anaconda environment or anything special required:
Python:
import os
os.system(f'powershell.exe az quantum execute --target-id ionq.qpu --job-name Pytest --resource-group **** --workspace-name **** --location **** -o table ')
And the result was definitely run on the actual hardware (took a good while):
Result Frequency
-------- ----------- -------------------------
0 0.53200000 ▐███████████ |
1 0.46800000 ▐█████████ |
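For completeness, the same CLI workaround can be invoked from Python without the extra PowerShell layer by using subprocess; a sketch, with the **** placeholders unchanged from the command above:

import subprocess

# Call the Azure CLI directly. On Windows, az is a .cmd wrapper, so
# shell=True is the simplest way to resolve it; check=True raises if
# the CLI returns a non-zero exit code.
subprocess.run(
    'az quantum execute --target-id ionq.qpu --job-name Pytest '
    '--resource-group **** --workspace-name **** --location **** -o table',
    shell=True, check=True)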
@Joab.Ai, thank you for posting this issue! We've identified this to be specific to the latest version of qsharp (0.16.2104.138035).
While we are looking into a fix, a workaround is to downgrade your qsharp package version:
Edit: we have fixed this issue in our latest release! Update to the latest version with this command:
conda install -c quantum-engineering qsharp=0.16.2105.140472
or simply run:
conda update -c quantum-engineering qsharp
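To confirm which qsharp version the environment actually picked up after updating, one quick check (a sketch):

import pkg_resources

# After the update, the reported version should be 0.16.2105.140472 or
# later, i.e. newer than the affected 0.16.2104 release.
print(pkg_resources.get_distribution("qsharp").version)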
I need to monitor one parameter for changes; if it has changed, I need to restart the script.
code:
import subprocess, os.path, time
from engine.db_manager import DbManager
DB = DbManager(os.path.abspath(os.path.join('../webserver/db.sqlite3')))
tmbot = None
telegram_config = DB.get_config('telegram')
old_telegram_token = ''
vkbot = None
vk_config = DB.get_config('vk')
old_vk_token = ''
while True:
    telegram_config = DB.get_config('telegram')
    if old_telegram_token != telegram_config['token']:
        if tmbot is not None:  # the original checked vkbot here, terminating the wrong bot
            tmbot.terminate()
        tmbot = subprocess.Popen(['python', 'tm/tm_bot.py'])
        old_telegram_token = telegram_config['token']
        print('telegram token was updated')
    vk_config = DB.get_config('vk')
    if old_vk_token != vk_config['token']:
        if vkbot is not None:
            vkbot.terminate()
        vkbot = subprocess.Popen(['python', 'vk/vk_bot.py'])
        old_vk_token = vk_config['token']
        print('vk token was updated')
    time.sleep(30)
I get errors (the error screenshot is not reproduced here).
While there might be subtle differences between Unix and Windows, the straight-up answer is: you can use the PYTHONPATH environment variable to let Python know where to look for libraries.
However, if you use venv, I'd recommend activating it first, or calling the relevant binary instead of setting the environment variable.
Consider this scenario: you have a venv at /tmp/so_demo/venv, and you try to run this file:
$ cat /tmp/so_demo/myscript.py
import requests
print("great success!")
Running the system python interpreter will not find the requests module, and will yield the following error:
$ python3 /tmp/so_demo/myscript.py
Traceback (most recent call last):
File "/tmp/so_demo/myscript.py", line 1, in <module>
import requests
ModuleNotFoundError: No module named 'requests'
As I have installed requests in the venv, if I provide the path to python, it will know where to look:
$ PYTHONPATH='/tmp/so_demo/venv/lib/python3.8/site-packages/' python3 /tmp/so_demo/myscript.py
great success!
but using the binary inside the venv is better, as you don't need to be familiar with the internal venv directory layout, and it is much more portable (notice that the path I provided earlier depends on the minor Python version):
$ /tmp/so_demo/venv/bin/python /tmp/so_demo/myscript.py
great success!
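To verify which interpreter and import search path are in effect under any of these invocations, a small check script (a sketch) helps:

import sys

# sys.prefix points at the venv root when the venv's own binary is used;
# sys.path lists the directories where modules such as 'requests' are resolved.
print("prefix:", sys.prefix)
for entry in sys.path:
    print("path entry:", entry)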
I am trying to use the ALDialog module to have a virtual conversation with the Choregraphe-simulated NAO6 robot. I have the script below:
import qi
import argparse
import sys
def main(session):
    """
    This example uses ALDialog methods.
    It's a short dialog session with two topics.
    """
    # Getting the service ALDialog
    ALDialog = session.service("ALDialog")
    ALDialog.setLanguage("English")
    # Writing the topics' qichat code as text strings (end-of-line characters are important!)
    topic_content_1 = ('topic: ~example_topic_content()\n'
                       'language: enu\n'
                       'concept:(food) [fruits chicken beef eggs]\n'
                       'u: (I [want "would like"] {some} _~food) Sure! You must really like $1 .\n'
                       'u: (how are you today) Hello human, I am fine thank you and you?\n'
                       'u: (Good morning Nao did you sleep well) No damn! You forgot to switch me off!\n'
                       'u: ([e:FrontTactilTouched e:MiddleTactilTouched e:RearTactilTouched]) You touched my head!\n')
    topic_content_2 = ('topic: ~dummy_topic()\n'
                       'language: enu\n'
                       'u:(test) [a b "c d" "e f g"]\n')
    # Loading the topics directly as text strings
    topic_name_1 = ALDialog.loadTopicContent(topic_content_1)
    topic_name_2 = ALDialog.loadTopicContent(topic_content_2)
    # Activating the loaded topics
    ALDialog.activateTopic(topic_name_1)
    ALDialog.activateTopic(topic_name_2)
    # Starting the dialog engine - we need to type an arbitrary string as the identifier
    # We subscribe only ONCE, regardless of the number of topics we have activated
    ALDialog.subscribe('my_dialog_example')
    try:
        raw_input("\nSpeak to the robot using rules from both the activated topics. Press Enter when finished:")
    finally:
        # Stopping the dialog engine
        ALDialog.unsubscribe('my_dialog_example')
        # Deactivating all topics
        ALDialog.deactivateTopic(topic_name_1)
        ALDialog.deactivateTopic(topic_name_2)
        # Now that the dialog engine is stopped and there are no more activated topics,
        # we can unload all topics and free the associated memory
        ALDialog.unloadTopic(topic_name_1)
        ALDialog.unloadTopic(topic_name_2)

if __name__ == "__main__":
    session = qi.Session()
    try:
        session.connect("tcp://desktop-6d4cqe5.local:9559")
    except RuntimeError:
        print("\nCan't connect to NAOqi at IP desktop-6d4cqe5.local (port 9559).\n"
              "Please check your script's arguments. Run with -h option for help.\n")
        sys.exit(1)
    main(session)  # main() takes only the session argument
My simulated robot has desktop-6d4cqe5.local as its IP address and its NAOqi port is 63361. I want to run the dialogs outside of Choregraphe in a Python script and use the dialog box within Choregraphe only for testing. When I ran the above Python file I got:
Traceback (most recent call last):
File "C:\Users\...\Documents\...\choregraphe_codes\Welcome\speak.py", line 6, in <module>
import qi
File "C:\Python27\Lib\site-packages\pynaoqi\lib\qi\__init__.py", line 93
async, PeriodicTask)
^
SyntaxError: invalid syntax
I couldn't figure out the problem, as there are not many resources online and the robot's documentation is a bit hard to understand.
Please help, thank you.
You are running the script with a Python version greater than 3.5, which now treats async as a keyword.
NAOqi only supports Python 2.
Try running your script with python2 explicitly.
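If the script may be launched by either interpreter, a guard at the top turns the cryptic SyntaxError into a clear message; a minimal sketch:

import sys

# The NAOqi Python SDK (pynaoqi) is Python 2 only; fail fast with an
# explicit message instead of a SyntaxError from inside the qi package.
if sys.version_info[0] != 2:
    sys.exit("This script requires Python 2 (NAOqi SDK). Run it with python2.")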
I have created an Event Grid triggered Azure Function in Python. I have deployed my solution to Azure successfully and execution works fine. But I have an issue calling another Python script in the same folder. My code is given below:
import os, json, subprocess
import logging
import azure.functions as func
def main(event: func.EventGridEvent):
    try:
        correctionsMessages = event.get_json()
        for correctionMessage in correctionsMessages:
            strMessage = json.dumps(correctionMessage)
            full_path_to_script = os.path.join(os.path.dirname(os.path.realpath(__file__)), correctionMessage['ScriptName'] + '.py')
            logging.info('Script Path: %s', full_path_to_script)
            logging.info('Parameter: %s', strMessage)  # was json.dumps(detectionMessage), an undefined name
            subprocess.check_call('python ' + full_path_to_script + ' ' + json.dumps(strMessage))
        result = json.dumps({
            'id': event.id,
            'data': event.get_json(),
            'topic': event.topic,
            'subject': event.subject,
            'event_type': event.event_type,
        })
        logging.info('Python EventGrid trigger processed an event: %s', result)
    except Exception as e:
        logging.info('Error: %s', e)
The above code is giving an error for subprocess.check_call. The error is "Error: [Errno 2] No such file or directory: 'python /home/site/wwwroot/Detections/Script1.py'". Script1.py is in the same folder as __init__.py. When I run this function locally, it works absolutely fine.
In my experience, the error is caused by subprocess.check_call not knowing the path of the python executable, not by the path of Script1.py.
In your local Azure Functions development environment, the Python path is present in the PATH environment variable, so subprocess.check_call can invoke python by searching for the executable there. In the cloud, however, no python path is pre-configured in that environment variable; only the Azure Functions host knows the real absolute path of the Python interpreter.
So the solution is to find the real absolute path of the Python interpreter and use it instead of the bare python in your code, as sketched below.
However, in the Azure Functions Python runtime, I think it's not a good idea to use subprocess.check_call to spawn a child process to process a given message. The safer and more correct way is to define a function in Script1.py, or directly in __init__.py, and pass the given message to it as a parameter to achieve the same result.
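A minimal sketch of the first fix, using sys.executable (the absolute path of the interpreter running the function app); the message value is illustrative:

import sys, os, json, subprocess

def run_script(message):
    # Spawn the child with the same interpreter that is running the
    # function app; sys.executable is its absolute path, so no PATH
    # lookup of a bare 'python' is needed.
    script = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'Script1.py')
    subprocess.check_call([sys.executable, script, json.dumps(message)])

For the in-process alternative, Script1.py would expose a function, say a hypothetical process_message(message), which __init__.py imports (from . import Script1) and calls directly; this avoids spawning a child process entirely.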