RasaNLUHttpInterpreter: takes from 1 to 4 positional arguments but 5 were given - python

I am using the RasaNLUHttpInterpreter as stated here to start my server. I give the class all four required parameters (model name, token, server URL, and project name). However, I always get an error saying that I am passing five arguments, which I am not.
The error has occurred since I updated Rasa Core and NLU to the latest versions. As far as I can tell from the docs, I am using the method correctly. Does anyone have an idea what I am doing wrong, or what's happening here?
Here is my run-server.py where I use the RasaNLUHttpInterpreter:
import os
from os import environ as env
from gevent.pywsgi import WSGIServer
from server import create_app
from rasa_core import utils
from rasa_core.interpreter import RasaNLUHttpInterpreter

utils.configure_colored_logging("DEBUG")

user_input_dir = "/app/nlu/" + env["RASA_NLU_PROJECT_NAME"] + "/user_input"
if not os.path.exists(user_input_dir):
    os.makedirs(user_input_dir)

nlu_interpreter = RasaNLUHttpInterpreter(
    'model_20190702-103405', None, 'http://rasa-nlu:5000', 'test_project')

app = create_app(
    model_directory=env["RASA_CORE_MODEL_PATH"],
    cors_origins="*",
    loglevel="DEBUG",
    logfile="./logs/rasa_core.log",
    interpreter=nlu_interpreter)

http_server = WSGIServer(('0.0.0.0', 5005), app)
http_server.serve_forever()
I am using:
rasa_nlu~=0.15.1
rasa_core==0.14.5

As already mentioned here, I have analyzed the problem in detail.
First of all, the method calls and the given link belong to a Rasa version that is deprecated. After updating to the latest Rasa version, which splits core and NLU into separate packages, the project was refactored to match the corresponding documentation.
After rebuilding the bot with the exact same setup, no errors were thrown and the bot worked as expected.
We came to the conclusion that this must have been a problem specific to threxx's workstation.
If someone else reaches this point, they are welcome to post here so that we can help.
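For anyone who lands here with the same message: Python counts the implicit self as the first positional argument, so a constructor with three optional parameters reports "takes from 1 to 4 positional arguments but 5 were given" when you pass four values. If the newer rasa_core signature indeed takes fewer separate parameters (e.g. a model name, an endpoint object, and a project name), that alone explains the error. A minimal reproduction with a hypothetical stand-in class (FakeInterpreter is not the real Rasa class):

```python
# Hypothetical stand-in for the interpreter class; NOT the real Rasa code.
# Three optional parameters plus the implicit `self` means Python accepts
# "from 1 to 4" positional arguments in total.
class FakeInterpreter:
    def __init__(self, model_name=None, endpoint=None, project_name="default"):
        self.model_name = model_name
        self.endpoint = endpoint
        self.project_name = project_name

try:
    # Four explicit arguments + `self` -> five positionals -> TypeError.
    FakeInterpreter('model_20190702-103405', None, 'http://rasa-nlu:5000', 'test_project')
except TypeError as e:
    print(e)  # e.g. __init__() takes from 1 to 4 positional arguments but 5 were given
```

So when an upgrade changes a constructor's parameter list, the "5 were given" count includes self; checking the installed version's signature with help(RasaNLUHttpInterpreter) shows how many parameters it actually takes.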


Get flow run UUID in Prefect 2.0

I'm currently discovering Prefect and trying to deploy it to schedule workflows. I struggle a bit to understand how to access some data, though. Here is my problem: I create a deployment and run it via the Python API, and I need the ID of the flow run it creates (to cancel it; other things may happen outside of the flow).
When I run without any scheduling I can access the data I need (the flow run UUID), but I do want the scheduling part. It may be because the run_deployment function is asynchronous, but as I am nowhere near an expert in Python I don't know for sure (well, that, and the fact that my code never exits after calling the main() function).
Here is what my code looks like:
import dateutil.parser  # used below for scheduled_time

from prefect import flow, task
from prefect.deployments import Deployment, run_deployment
from datetime import datetime, date, time, timezone

# Import the flow:
from script import my_flow

# Configure the deployment:
deployment_name = "my_deployment"

# Create the deployment for the flow:
deployment = Deployment.build_from_flow(
    flow=my_flow,
    name=deployment_name,
    version=1,
    work_queue_name="my_queue",
)
deployment.apply()

def main():
    # Schedule a flow run based on the deployment:
    response = run_deployment(
        name="my_flow/" + deployment_name,
        parameters={my_param},  # placeholder; my_param is defined elsewhere
        scheduled_time=dateutil.parser.isoparse(scheduledDate),
        flow_run_name="my_run",
    )
    print(response)

if __name__ == "__main__":
    main()
    exit()
I searched a bit and saw in that post that it was possible to print the flow run ID as it was executed, but in my case I need it before the execution.
Is there any way to get that data (using the Python API)? Or to set the flow run ID myself? (I've already thoroughly checked the docs; I'm quite sure the latter is not possible.)
Thanks a lot for your time!
Gauthier
As of 2.7.12 - released the same day you posted your question - you can create names for flows programmatically. Does that get you what you need?
Both tasks and flows now expose a mechanism for customizing the names of runs! This new keyword argument (flow_run_name for flows, task_run_name for tasks) accepts a string that will be used to create a run name for each run of the function. The most basic usage is as follows:
from datetime import datetime
from prefect import flow, task

@task(task_run_name="custom-static-name")
def my_task(name):
    print(f"hi {name}")

@flow(flow_run_name="custom-but-fixed-name")
def my_flow(name: str, date: datetime):
    return my_task(name)

my_flow(name="marvin", date=datetime.now())
This is great, but doesn’t help distinguish between multiple runs of the same task or flow. In order to make these names dynamic, you can template them using the parameter names of the task or flow function, using all of the basic rules of Python string formatting as follows:
from datetime import datetime
from prefect import flow, task

@task(task_run_name="{name}")
def my_task(name):
    print(f"hi {name}")

@flow(flow_run_name="{name}-on-{date:%A}")
def my_flow(name: str, date: datetime):
    return my_task(name)

my_flow(name="marvin", date=datetime.now())
See the docs or https://github.com/PrefectHQ/prefect/pull/8378 for more details.
run_deployment returns a flow run object, which you named response in your code.
If you want to get the ID before the flow run is actually executed, just set timeout=0 so that run_deployment returns immediately after submission.
Then you only have to do:
flow_run = run_deployment(
    name="my_flow/" + deployment_name,
    parameters={my_param},
    scheduled_time=dateutil.parser.isoparse(scheduledDate),
    flow_run_name="my_run",
    timeout=0,
)
print(flow_run.id)

how to fix TypeError: __init__() got an unexpected keyword argument 'session_name'?

I tried to deploy this bot to Heroku, but I got this error:
File "/app/bot/utubebot.py", line 8, in __init__
super().__init__(
TypeError: __init__() got an unexpected keyword argument 'session_name'
utubebot.py code
from pyrogram import Client
from .config import Config

class UtubeBot(Client):
    def __init__(self):
        super().__init__(
            session_name=Config.SESSION_NAME,
            bot_token=Config.BOT_TOKEN,
            api_id=Config.API_ID,
            api_hash=Config.API_HASH,
            plugins=dict(root="bot.plugins"),
            workers=6,
        )
        self.DOWNLOAD_WORKERS = 6
        self.counter = 0
        self.download_controller = {}
To be sincere, I'm a noob in Python; I need detailed help, please. :)
I don't have enough reputation to comment on Silvio's answer, but he's got half of the answer. The author of the bot you're linking did in fact mean session_name, but you'll note that the latest version of the application was released on Jun 27, 2021. At that time, the library it calls, pyrogram, was on a 1.x build and took a session_name parameter in the Client class. As of version 2.0.0, it no longer does. The solution is to downgrade the pyrogram dependency to a version that matches the API utube was developed against, or to upgrade utube to meet the new API.
This kind of thing is why it's important to specify a dependency's version in the requirements.txt file: had the author anchored the version, saying something like pyrogram==1.2.0, the error wouldn't come up. When you don't specify a version (like just pyrogram, as the author has done), the latest version gets installed, even if it contains breaking changes.
Link to the Client implementation in v1.2.0 of pyrogram, the latest version before the latest release of utube: https://github.com/pyrogram/pyrogram/blob/v1.2.0/pyrogram/client.py. Notice that the constructor does include session_name and is otherwise structured quite differently from the latest release, linked here: https://github.com/pyrogram/pyrogram/blob/master/pyrogram/client.py. It seems that session_name was in fact renamed to session_string, but the semantics of how it's handled and validated are a bit different.
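In requirements.txt form, the anchoring advice above looks like this (the version numbers are illustrative):

```text
# Pin an exact known-good version:
pyrogram==1.2.0
# ...or allow compatible patch releases only:
# pyrogram~=1.2.0
```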
The Client constructor does not take a session_name argument. You can see a full list of the accepted arguments at that link. Perhaps you meant name or session_string. It's difficult to tell from the code you've shown, so I recommend reading that page and seeing which argument you meant to pass.
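One general way to see which keyword arguments a constructor actually accepts, without digging through the source, is inspect.signature. DummyClient below is a hypothetical stand-in; with pyrogram installed you would inspect pyrogram.Client itself:

```python
import inspect

# Hypothetical stand-in for pyrogram.Client; with pyrogram installed you
# would inspect pyrogram.Client itself.
class DummyClient:
    def __init__(self, name, api_id=None, api_hash=None, session_string=None):
        pass

# Map of the constructor's parameters, in declaration order.
params = inspect.signature(DummyClient.__init__).parameters
print(list(params))  # ['self', 'name', 'api_id', 'api_hash', 'session_string']

# A quick guard before calling with a keyword that may have been renamed:
if "session_name" not in params:
    print("no session_name parameter; perhaps you meant 'name' or 'session_string'")
```

The same check against the installed pyrogram version tells you immediately whether you need the 1.x or 2.x calling convention.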

PytzUsageWarning: The zone attribute is specific to pytz's interface; please migrate to a new time zone provider

I am writing a simple function that sends messages based on a schedule using AsyncIOScheduler.
scheduler = AsyncIOScheduler()
scheduler.add_job(job, "cron", day_of_week="mon-fri", hour="16")
scheduler.start()
It seems to work, but I always get the following message:
PytzUsageWarning: The zone attribute is specific to pytz's interface; please migrate to a new time zone provider. For more details on how to do so, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html
PytzUsageWarning: The localize method is no longer necessary, as this time zone supports the fold attribute (PEP 495)
I am not familiar with time handling in Python at all, so I am not really sure what this message is asking me to do. I visited the link provided in the message, but the explanation there is too complicated for me. As I understand it, I have to migrate to PEP 495-style timezone handling, but how exactly can I do that?
To set a PEP 495-compatible timezone in APScheduler, pass a timezone parameter when instantiating the scheduler:
scheduler = AsyncIOScheduler(timezone="Europe/Berlin")
scheduler.add_job(job, "cron", day_of_week="mon-fri", hour="16")
scheduler.start()
With flask-APScheduler (version 1.12.2), add the timezone to the configuration class:
"""Basic Flask example from the flask-apscheduler examples, file jobs.py"""
from flask import Flask
from flask_apscheduler import APScheduler

class Config:
    """App configuration."""

    JOBS = [
        {
            "id": "job1",
            "func": "jobs:job1",
            "args": (1, 2),
            "trigger": "interval",
            "seconds": 10,
        }
    ]

    SCHEDULER_API_ENABLED = True
    SCHEDULER_TIMEZONE = "Europe/Berlin"  # <========== add here

def job1(var_one, var_two):
    print(str(var_one) + " " + str(var_two))

if __name__ == "__main__":
    app = Flask(__name__)
    app.config.from_object(Config())
    scheduler = APScheduler()
    scheduler.init_app(app)
    scheduler.start()
    app.run()
I was getting the same warning message because of the zone. What I did was:
import tzlocal

scheduler = AsyncIOScheduler(timezone=str(tzlocal.get_localzone()))
With the local timezone set that way, I no longer get the warning message and it runs correctly.
For now, you can't effectively suppress these warnings, because the problem comes from apscheduler itself.
In your particular case the warning comes from here:
if obj.zone == 'local':
The problem is that pytz is considered deprecated in favor of the zoneinfo module and its backports. So even if you set the timezone argument directly as suggested above, you'll probably face the same warning from other places until apscheduler fixes its pytz usage for modern Python versions.
Still, there is actually something you could do, though I'm not sure this kind of fix is acceptable for you.
PytzUsageWarning comes from the pytz_deprecation_shim package, which is a dependency of tzlocal, and tzlocal is deeply integrated with pytz. As apscheduler has a relatively relaxed version constraint on tzlocal, you can install a fairly old version of it that doesn't emit this warning but is still acceptable to apscheduler:
pip install tzlocal==2.1
But please note that this could break other project dependencies, so be careful.
I just changed to:
import pytz
from apscheduler.schedulers.blocking import BlockingScheduler

if __name__ == '__main__':
    scheduler = BlockingScheduler(timezone=pytz.timezone('Asia/Ho_Chi_Minh'))
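For your own datetime code (outside the scheduler), the migration the warning asks for mostly amounts to replacing pytz with the standard-library zoneinfo module (Python 3.9+; the backports.zoneinfo package covers older versions). A minimal sketch; the date chosen falls on the end of DST in Germany, so the PEP 495 fold attribute has something to disambiguate:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

berlin = ZoneInfo("Europe/Berlin")

# No pytz-style localize() is needed: attach the zone directly.
# 2023-10-29 02:30 occurs twice in Berlin (clocks fall back at 03:00);
# fold=1 (PEP 495) selects the second occurrence, after the change.
dt = datetime(2023, 10, 29, 2, 30, tzinfo=berlin, fold=1)
print(dt.isoformat())  # 2023-10-29T02:30:00+01:00
```

Note that apscheduler 3.x still uses pytz internally, so for the scheduler itself the timezone="Europe/Berlin" string shown in the answers above remains the practical fix; zoneinfo covers the datetimes you construct yourself.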

bingads V13 report request fails in python sdk

I am trying to download a bingads report using the Python SDK, but I keep getting an error that says "Type not found: 'Aggregation'" after submitting a report request. I've tried all four options mentioned in the following link:
https://github.com/BingAds/BingAds-Python-SDK/blob/master/examples/v13/report_requests.py
Authentication process prior to request works just fine.
I execute the following:
report_request = get_report_request(authorization_data.account_id)
reporting_download_parameters = ReportingDownloadParameters(
    report_request=report_request,
    result_file_directory=FILE_DIRECTORY,
    result_file_name=RESULT_FILE_NAME,
    overwrite_result_file=True,  # set this True if you want to overwrite the same file
    timeout_in_milliseconds=TIMEOUT_IN_MILLISECONDS
)

output_status_message("-----\nAwaiting download_report...")
download_report(reporting_download_parameters)
After careful debugging, it seems that the program fails when trying to execute a command within reporting_service_manager.py. Here is the workflow:
def download_report(self, download_parameters):
    report_file_path = self.download_file(download_parameters)
then:
def download_file(self, download_parameters):
    operation = self.submit_download(download_parameters.report_request)
then:
def submit_download(self, report_request):
    self.normalize_request(report_request)
    response = self.service_client.SubmitGenerateReport(report_request)
SubmitGenerateReport starts a sequence of events ending with a call to the _ServiceCall.__init__ function within service_client.py, which raises the exception "Type not found: 'Aggregation'":
try:
    response = self.service_client.soap_client.service.__getattr__(self.name)(*args, **kwargs)
    return response
except Exception as ex:
    if need_to_refresh_token is False \
            and self.service_client.refresh_oauth_tokens_automatically \
            and self.service_client._is_expired_token_exception(ex):
        need_to_refresh_token = True
    else:
        raise ex
Can anyone shed some light?
Thanks
Please be sure to set Aggregation, e.g. as shown here:
aggregation = 'Daily'
If the report type does not use aggregation, you can set Aggregation=None.
Does this help?
This may be a bit late, two months after the fact, but maybe it will help someone else. I had the same error (though I suppose it may not be the same issue). It looks like you did what I did (and I'm sure others will as well): copy-pasted the Microsoft example code and tried to run it, only to find that it didn't work.
I spent quite some time trying to debug the issue, and it looked to me like the XML wasn't being searched correctly. I was using suds-py3 for the script at the time, so I tried suds-community, and everything just worked after that.
I also re-read the Bing Ads API getting-started walkthrough and found that they recommend suds-jurko instead.
Long story short: if you want to use the bingads API, don't use suds-py3; use either suds-community (which I can confirm works for everything I've used the API for) or suds-jurko (the one recommended by Microsoft).

Call function by URL in Flask

I have a C++ DLL. I want to make a Flask server to call the functions in this DLL. So I use ctypes inside Flask. Here is my code in views.py:
from flask import render_template
from MusicServer_Flask import app
from ctypes import *
from ctypes import wintypes
import os

# Load the DLL:
cppdll = CDLL("C:\\VS_projects\\MusicServer_Flask\\NetServerInterface.dll")

# Wrap the DLL function:
initnetwork = getattr(cppdll, "?InitNetwork@@YAHQAD0H@Z")  # the function can be accessed successfully
initnetwork.argtypes = [c_char_p, c_char_p, c_int]
initnetwork.restype = wintypes.BOOL

# Define the route:
@app.route('/InitNetwork/<LocalIP>/<ServerIP>/<LocalDeviceID>')
def InitNetwork(LocalIP, ServerIP, LocalDeviceID):
    return initnetwork(LocalIP, ServerIP, LocalDeviceID)
With the help of this question, I can successfully call the function in the Python interactive window using InitNetwork(b"192.168.1.101", b"192.168.1.101", 555).
However, when I run the Flask project and request this route: http://192.168.1.102:8081/InitNetwork/b"192.168.1.102"/b"192.168.1.102"/555, it gives me an error.
It seems that b"192.168.1.102" becomes b%22192.168.1.102%22 in the requested URL. Why does this happen? How can I use the right URL to call this function?
Thank you for your attention.
Edit:
Thanks to Paula Thomas's answer, I think I have moved one step toward the solution. I changed the code as below to convert the first and second input parameters into bytes:
@app.route('/InitNetwork/<LocalIP>/<ServerIP>/<LocalDeviceID>')
def InitNetwork(LocalIP, ServerIP, LocalDeviceID):
    b_LocalIP = bytes(LocalIP, 'utf-8')
    b_ServerIP = bytes(ServerIP, 'utf-8')
    return initnetwork(b_LocalIP, b_ServerIP, LocalDeviceID)
However, neither http://172.16.4.218:8081/InitNetwork/172.16.4.218/172.16.4.218/555 nor http://172.16.4.218:8081/InitNetwork/"172.16.4.218"/"172.16.4.218"/555 works; it still gives me a wrong-type error. Would somebody please help?
I think you'll find that Flask is passing you strings (type str), whereas your C++ function expects bytes for the first two parameters (hence the b prefix) and an int for the third. It is your responsibility to convert them.
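A minimal sketch of the conversions the route handler has to perform. Here initnetwork is a stub standing in for the ctypes-wrapped DLL call (so this runs without the DLL), and handle plays the role of the view function; note also that a Flask view must return a str/bytes/Response, so returning the raw BOOL result directly can itself produce an error:

```python
# Stub standing in for the ctypes function: like the argtypes above,
# it expects (bytes, bytes, int).
def initnetwork(local_ip, server_ip, device_id):
    assert isinstance(local_ip, bytes) and isinstance(server_ip, bytes)
    assert isinstance(device_id, int)
    return True  # pretend the DLL call succeeded

# What the view function must do with the values Flask hands it.
# With a route like '/InitNetwork/<LocalIP>/<ServerIP>/<int:LocalDeviceID>'
# the third argument already arrives as int; the first two arrive as str
# (no b"..." prefix in the URL) and need encoding.
def handle(LocalIP, ServerIP, LocalDeviceID):
    ok = initnetwork(LocalIP.encode("utf-8"), ServerIP.encode("utf-8"), LocalDeviceID)
    # A Flask view must return str/bytes/Response, not a bare bool/int.
    return str(ok)

print(handle("192.168.1.101", "192.168.1.101", 555))  # True
```

Without the `<int:...>` converter, LocalDeviceID also arrives as str and would need int(LocalDeviceID) before the c_int argument is accepted; and since restype is wintypes.BOOL, the real call returns an int, which Flask rejects as a view return value unless wrapped with str() or jsonify.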
