I'm trying to write a "hello world" of IPC between a Node.js and a Python 3 application. At the moment I'm getting a parse error when a new message arrives at the Python app. The code is:
Python:
from ipcqueue import posixmq
from time import sleep

q1 = posixmq.Queue('/fila1')

while True:
    while q1.qsize() > 0:
        print('p1: Recebi na minha fila: ' + str(q1.get()))
    sleep(0.5)
Node:
const PosixMQ = require('posix-mq');
var mq = new PosixMQ();
mq.open({
    name: '/fila1',
    create: true,
    mode: '0777',
    maxmsgs: 10,
    msgsize: 11
});
mq.push("hello world");
mq.close();
When the Node app sends the message, the Python app fails with:
File "../test-fila1.py", line 9, in
print('p1: Recebi na minha fila: ' + str(q1.get())) File "/usr/lib/python3.7/site-packages/ipcqueue/posixmq.py", line 174, in
get
return self._serializer.loads(data) File "/usr/lib/python3.7/site-packages/ipcqueue/serializers.py", line 14,
in loads
return pickle.loads(data) KeyError: 101
[2]- Exit 1 python3 ../test-fila1.py
[3]+ Done node index.js
EDIT
So, I decided to change the question from "KeyError: 101 when loading ipc message between node and python apps" to "How to IPC between node and python using posix message queue?" because I just noticed that the sample code I posted generates a different error every time I run it. The error now being raised (with no code changes) is:
Traceback (most recent call last):
  File "snippets/test-fila1.py", line 9, in <module>
    print('p1: Recebi na minha fila: ' + str(q1.get()))
  File "/usr/lib/python3.7/site-packages/ipcqueue/posixmq.py", line 174, in get
    return self._serializer.loads(data)
  File "/usr/lib/python3.7/site-packages/ipcqueue/serializers.py", line 14, in loads
    return pickle.loads(data)
_pickle.UnpicklingError: unpickling stack underflow
If you look at the __init__ of the Queue class:
class Queue(object):
    """
    POSIX message queue.
    """

    def __init__(self, name, maxsize=10, maxmsgsize=1024, serializer=PickleSerializer):
As you can see, the default serializer is PickleSerializer, which means it assumes the data arriving on the queue is pickled, while you are sending raw data from Node.js. The fix is simple: use the RawSerializer.
from ipcqueue import posixmq
from time import sleep
from ipcqueue.serializers import RawSerializer

q1 = posixmq.Queue('/fila1', serializer=RawSerializer)

while True:
    while q1.qsize() > 0:
        print('p1: Recebi na minha fila: ' + str(q1.get()))
    sleep(0.5)
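One follow-up note: with RawSerializer, get() hands back the raw bytes that the Node side pushed, so you may want to decode them yourself. A minimal sketch, assuming the Node app keeps sending plain UTF-8 strings:

from ipcqueue import posixmq
from ipcqueue.serializers import RawSerializer

q1 = posixmq.Queue('/fila1', serializer=RawSerializer)
msg = q1.get()  # raw bytes, e.g. b'hello world' (no unpickling happens with RawSerializer)
print('p1: Recebi na minha fila: ' + msg.decode('utf-8'))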
I've been trying to follow the examples and documentation for the Python ad_manager library for the Google Ads API, but I haven't been able to complete a successful request. I currently have my developer token, client_id, client_secret, and refresh_token in my Google Ads YAML file, but I keep getting the error "argument should be integer or bytes-like object, not 'str'" when calling WaitForReport, following the example code below. I was wondering if anyone had any advice on how I could tackle this issue.
import tempfile

# Import appropriate modules from the client library.
from googleads import ad_manager
from googleads import errors


def main(client):
    # Initialize a DataDownloader.
    report_downloader = client.GetDataDownloader(version='v202111')

    # Create report job.
    report_job = {
        'reportQuery': {
            'dimensions': ['COUNTRY_NAME', 'LINE_ITEM_ID', 'LINE_ITEM_NAME'],
            'columns': ['UNIQUE_REACH_FREQUENCY', 'UNIQUE_REACH_IMPRESSIONS',
                        'UNIQUE_REACH'],
            'dateRangeType': 'REACH_LIFETIME'
        }
    }

    try:
        # Run the report and wait for it to finish.
        report_job_id = report_downloader.WaitForReport(report_job)
    except errors.AdManagerReportError as e:
        print('Failed to generate report. Error was: %s' % e)

    # Change to your preferred export format.
    export_format = 'CSV_DUMP'

    report_file = tempfile.NamedTemporaryFile(suffix='.csv.gz', delete=False)

    # Download report data.
    report_downloader.DownloadReportToFile(
        report_job_id, export_format, report_file)

    report_file.close()

    # Display results.
    print('Report job with id "%s" downloaded to:\n%s' % (
        report_job_id, report_file.name))


if __name__ == '__main__':
    # Initialize client object.
    ad_manager_client = ad_manager.AdManagerClient.LoadFromStorage()
    main(ad_manager_client)
Edit:
Below is the stack trace:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/googleads/common.py", line 984, in MakeSoapRequest
return soap_service_method(
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/zeep/proxy.py", line 46, in __call__
return self._proxy._binding.send(
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/zeep/wsdl/bindings/soap.py", line 135, in send
return self.process_reply(client, operation_obj, response)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/zeep/wsdl/bindings/soap.py", line 229, in process_reply
return self.process_error(doc, operation)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/zeep/wsdl/bindings/soap.py", line 317, in process_error
raise Fault(
zeep.exceptions.Fault: Unknown fault occured
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "google_ads.py", line 72, in <module>
main(ad_manager_client)
File "google_ads.py", line 33, in main1
report_job_id = report_downloader.WaitForReport(report_job)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/googleads/ad_manager.py", line 784, in WaitForReport
report_job_id = service.runReportJob(report_job)['id']
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/googleads/common.py", line 989, in MakeSoapRequest
underlying_exception = e.detail.find(
TypeError: argument should be integer or bytes-like object, not 'str'
In your YAML file, do you have your account number in quotes? (either single or double?)
Additionally, I would highly recommend not going with this API if you have the option. It will be sunset in April and will no longer work. The newer Google Ads API (as opposed to the AdWords API) is available, stable and much easier to work with. The Ad Manager examples are good too.
The problem seems to be that zeep raises a Fault which includes the returned XML response in zeep.Fault.detail.
Somewhat counter-intuitively, this attribute is not a string but a bytes sequence, because zeep.wsdl.utils.etree_to_string calls etree.tostring() with encoding="utf-8" instead of encoding="unicode"; the latter would make sure it's a proper str.
googleads then tries to look for specific error strings inside that XML using find(), but even though find() is defined on both str and bytes, the type of the substring to search for has to match the type being searched.
Thus, in
underlying_exception = e.detail.find(
    '{%s}ApiExceptionFault' % self._GetBindingNamespace())
bytes.find() is called with a str argument, causing the TypeError you see.
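A minimal, self-contained illustration of that type mismatch (the XML content here is made up; only the types matter):

detail = b"<soap:Fault>...</soap:Fault>"   # zeep hands the fault detail back as bytes
detail.find(b"ApiExceptionFault")          # fine: returns an index or -1
detail.find("ApiExceptionFault")           # TypeError: argument should be integer or bytes-like object, not 'str'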
I'd argue that zeep.wsdl.utils.etree_to_string() should be adjusted to actually return a str instead of bytes. You could try opening an issue on Zeep's Github repository.
So, I'm still at the noob level when it comes to Python. I know... I know... there's probably a more efficient way to do what I'm trying to do, but I'm still learning and hopefully I'll get better with practice.
For a training project, I'm writing a script to do various DNS operations against a domain. I found dnspython and it seemed to be exactly what I needed, and I thought I was done with it, but when I tried it against a different domain it kept failing at the zone transfer.
I've got two domains hardcoded right now for testing. The megacorpone domain was working as I expected; however, now it's failing with no code change. To get it to work in the first place, I had to filter out the first record '#' that was returned, otherwise it failed as well.
The zonetransfer.me domain sometimes completes the script without error, but it also fails sporadically, and it never displays the host records for some reason. I've not been able to figure out how to fix it; I've been banging my head against it for a while now.
The megacorpone run was working every time earlier; now it's not working at all. The only thing I can think of so far is that it may be a timing issue.
Run with megacorpone.com
Attempting zone transfers for megacorpone.com
Traceback (most recent call last):
File "/home/kali/Exercises/Module_7/dns-axfer.py", line 56, in zoneXFR
zone = dns.zone.from_xfr(dns.query.xfr(str(server).rstrip('.'), domain))
File "/usr/lib/python3/dist-packages/dns/zone.py", line 1106, in from_xfr
for r in xfr:
File "/usr/lib/python3/dist-packages/dns/query.py", line 627, in xfr
raise TransferError(rcode)
dns.query.TransferError: Zone transfer error: REFUSED
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/kali/Exercises/Module_7/dns-axfer.py", line 73, in <module>
zoneXFR()
File "/home/kali/Exercises/Module_7/dns-axfer.py", line 66, in zoneXFR
print ("\nResults for",server, "\nZone origin:", str(zone.origin).rstrip('.'))
UnboundLocalError: local variable 'zone' referenced before assignment
Run 1 with zonetransfer.me
Attempting zone transfers for zonetransfer.me
Results for nsztm1.digi.ninja.
Zone origin: zonetransfer.me
---------------------------------------------------------------------------
Results for nsztm1.digi.ninja.
Zone origin: zonetransfer.me
---------------------------------------------------------------------------
[*] Error: <class 'dns.resolver.NoAnswer'> The DNS response does not contain an answer to the question: _acme-challenge.zonetransfer.me. IN A
Results for nsztm2.digi.ninja.
Zone origin: zonetransfer.me
---------------------------------------------------------------------------
Results for nsztm2.digi.ninja.
Zone origin: zonetransfer.me
---------------------------------------------------------------------------
[*] Error: <class 'dns.resolver.NoAnswer'> The DNS response does not contain an answer to the question: _acme-challenge.zonetransfer.me. IN A
Run 2 with no code change (zonetransfer.me)
Attempting zone transfers for zonetransfer.me
Traceback (most recent call last):
File "/home/kali/Exercises/Module_7/dns-axfer.py", line 56, in zoneXFR
zone = dns.zone.from_xfr(dns.query.xfr(str(server).rstrip('.'), domain))
File "/usr/lib/python3/dist-packages/dns/zone.py", line 1106, in from_xfr
for r in xfr:
File "/usr/lib/python3/dist-packages/dns/query.py", line 596, in xfr
_net_write(s, tcpmsg, expiration)
File "/usr/lib/python3/dist-packages/dns/query.py", line 364, in _net_write
current += sock.send(data[current:])
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/kali/Exercises/Module_7/dns-axfer.py", line 73, in <module>
zoneXFR()
File "/home/kali/Exercises/Module_7/dns-axfer.py", line 66, in zoneXFR
print ("\nResults for",server, "\nZone origin:", str(zone.origin).rstrip('.'))
UnboundLocalError: local variable 'zone' referenced before assignment
My script: bash away... I can always take constructive criticism.
#!/usr/bin/python3

import sys, argparse
import dns.query
import dns.zone
import dns.resolver
from colorama import Fore, Style

bracket = f"{Fore.BLUE}[{Fore.GREEN}*{Fore.BLUE}]{Style.RESET_ALL} "
bracket_err = f"{Fore.BLUE}[{Fore.RED}*{Fore.BLUE}]{Style.RESET_ALL} "

'''
parser = argparse.ArgumentParser()
parser.add_argument('domain')
args = parser.parse_args()
'''

# domain = (sys.argv[1])
domain = 'megacorpone.com'
#domain = 'zonetransfer.me'


def line():
    print ('-' * 75)
    return None


def resolveDNS(system):
    resolver = dns.resolver.Resolver()
    results = resolver.query(system , "A")
    return results


def getNS ():
    name_servers = dns.resolver.query(domain, 'NS')
    print ("\nThe name servers for " + domain + " are:")
    line()
    for system in name_servers:
        A_records = resolveDNS(str(system))
        for item in A_records:
            answer = ','.join([str(item)])
            print (bracket, "{:30}".format(str(system).rstrip('.')), "{:15}".format(answer))
    return name_servers


def getMX():
    mail_server = dns.resolver.query(domain, 'MX')
    print("\nMail servers for", domain)
    line()
    for system in mail_server:
        A_records = resolveDNS(str(system.exchange))
        for item in A_records:
            answer = ','.join([str(item)])
            print(bracket, "{:30}".format(str(system.exchange).rstrip('.')), "{:15}".format(str(answer)), '\t', "{:5}".format("Preference:"), str(system.preference))
    return None


def zoneXFR():
    print ("\nAttempting zone transfers for", domain,)
    for server in name_servers:
        try:
            zone = dns.zone.from_xfr(dns.query.xfr(str(server).rstrip('.'), domain))
            print ("\nResults for",server, "\nZone origin:", str(zone.origin).rstrip('.'))
            line()
            for host in zone:
                if str(host) != '#':
                    A_records = resolveDNS(str(host) + "." + domain)
                    for item in A_records:
                        answer = ','.join([str(item)])
                        print(bracket, "{:30}".format(str(host) + "." + domain), answer)
        except Exception as e:
            print ("\nResults for",server, "\nZone origin:", str(zone.origin).rstrip('.'))
            line()
            print (bracket_err, f"{Fore.RED}Error:{Style.RESET_ALL}", e.__class__, e)


name_servers = getNS()
getMX()
zoneXFR()
print("\n")
I see that you are querying well-known name servers that are specifically set up for testing. However, for the benefit of other readers, I will add a couple of explanations.
As you are probably aware, most name servers will not allow zone transfers nowadays. That being said, it is possible that each of the name servers for a given domain name will behave differently (they could have different configurations and even be running different software).
In the case of megacorpone.com there are 3 name servers listed:
ns2.megacorpone.com.
ns3.megacorpone.com.
ns1.megacorpone.com.
ns2.megacorpone.com is the only one that did allow a zone transfer.
This message
dns.query.TransferError: Zone transfer error: REFUSED
means exactly what it says: your query was refused. You probably talked to the wrong name server.
Then you have another error, which suggests a variable scoping issue:
UnboundLocalError: local variable 'zone' referenced before assignment
You are calling functions in this order:
name_servers = getNS()
getMX()
zoneXFR()
If name_servers is empty, then the subsequent call to zoneXFR will fail too, because this code:
for server in name_servers:
will try to iterate over an empty list.
Intermittent DNS resolution failures are common so a few checks are required here. At the very least, make sure that the list of NS is not empty.
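A minimal guard along those lines could look like this (a sketch; it reuses the globals from the script above and assumes dnspython raises NoAnswer/NXDOMAIN when the lookup fails):

try:
    name_servers = getNS()
except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN) as e:
    sys.exit(bracket_err + "Could not get NS records for " + domain + ": " + str(e))
if not list(name_servers):
    sys.exit(bracket_err + "Empty NS list for " + domain)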
Another issue: you start a for loop outside of the try block so your control structure is broken right in the middle:
for server in name_servers:
    try:
        zone = dns.zone.from_xfr(dns.query.xfr(str(server).rstrip('.'), domain))
        print ("\nResults for",server, "\nZone origin:", str(zone.origin).rstrip('.'))
        line()
Do this instead:
try:
    for server in name_servers:
        zone = dns.zone.from_xfr(dns.query.xfr(str(server).rstrip('.'), domain))
        print ("\nResults for",server, "\nZone origin:", str(zone.origin).rstrip('.'))
        ...
I suspect that your script fails intermittently because the list of name servers is not always returned in the same order. If the first NS returned is ns1.megacorpone.com. or ns3.megacorpone.com. then the code crashes. If the script starts with ns2.megacorpone.com (the sole NS allowing zone transfers) then it seems to work OK.
When this code fails (AXFR denied):
zone = dns.zone.from_xfr(dns.query.xfr(str(server).rstrip('.'), domain))
then zone is not defined, and that's why you cannot print it in your exception block. Instead, show the domain name or some other variables that you know are defined and valid.
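For instance, the except block could report the server (which is always defined at that point) rather than zone; a minimal sketch in the style of the script above:

        except Exception as e:
            print(bracket_err, "Results for", server, "- zone transfer failed:", e.__class__, e)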
So if the AXFR is denied, your script should handle the dns.query.TransferError exception and quietly move on to the next NS, if any, until the list has been exhausted.
Another bit of advice: you currently try to resolve every resource name other than '#'. Instead, look at the record type. You should only resolve CNAME, MX or NS targets. The other common types are TXT, A, AAAA and SOA; the rest are more exotic, such as NAPTR, LOC or SRV. Nothing there should need resolving, I think.
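For example, instead of re-resolving every node name, you could walk the transferred zone by record type. A short sketch using dnspython's Zone.iterate_rdatas (a possible drop-in for the inner loop above):

for name, ttl, rdata in zone.iterate_rdatas('A'):
    # only the A records that came back in the transfer; no extra lookups needed
    print(bracket, "{:30}".format(str(name) + "." + domain), rdata.address)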
I fixed up your code; it doesn't look great yet, but it works:
#!/usr/bin/python3
# you might want to run python3 -m pip install dnspython before running this script

import sys
import dns.query
import dns.zone
import dns.resolver

# formatting setup
from colorama import Fore, Style
bracket = f"{Fore.BLUE}[{Fore.GREEN}*{Fore.BLUE}]{Style.RESET_ALL} "
bracket_err = f"{Fore.BLUE}[{Fore.RED}*{Fore.BLUE}]{Style.RESET_ALL} "


def drawLine():
    print ('-' * 75)


# read arguments
try:
    domain = (sys.argv[1])
except:
    print("[!] USAGE: python3 zt.py DOMAIN_NAME")
    sys.exit(0)


# DNS functions
def resolveDNS(name):
    resolver = dns.resolver.Resolver()
    results = resolver.query(name , "A")
    return results


def getNS (domain):
    mapping = {}
    name_servers = dns.resolver.query(domain, 'NS')
    print ("\nThe name servers for " + domain + " are:")
    drawLine()
    for name_server in name_servers:
        A_records = resolveDNS(str(name_server))
        for item in A_records:
            answer = ','.join([str(item)])
            mapping[str(name_server)] = answer
            print (bracket, "{:30}".format(str(name_server).rstrip('.')), "{:15}".format(answer))
    return mapping


def zoneXFR(server):
    try:
        zone = dns.zone.from_xfr(dns.query.xfr(str(server).rstrip('.'), domain))
    except Exception as e:
        print (bracket_err, f"{Fore.RED}Error:{Style.RESET_ALL}", e.__class__, e)
    else:
        print ("\nResults for",server, "\nZone origin:", str(zone.origin).rstrip('.'))
        drawLine()
        for host in zone:
            if str(host) != '#':
                A_records = resolveDNS(str(host) + "." + domain)
                for item in A_records:
                    answer = ','.join([str(item)])
                    print(bracket, "{:30}".format(str(host) + "." + domain), answer)
        drawLine()


name_servers = getNS(domain)
for server in name_servers:
    print ("\nAttempting zone transfers for", server,name_servers[server])
    zoneXFR(name_servers[server])
This error has been arising for quite some time, but it doesn't like to appear very frequently; it's time I squashed it.
I see that it appears whenever I have more than a single thread. The application is quite extensive, so I'll post the relevant code snippet down below.
I'm using datetime.datetime.strptime to parse a field of my message into a datetime object. When I use it within a multithreaded function, the error arises on one thread but everything works perfectly fine on the other.
The error
Exception in thread Thread-7:
Traceback (most recent call last):
File "C:\Users\kdarling\AppData\Local\Continuum\anaconda3\envs\lobsandbox\lib\threading.py", line 801, in __bootstrap_inner
self.run()
File "C:\Users\kdarling\AppData\Local\Continuum\anaconda3\envs\lobsandbox\lib\threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "d:\kdarling_lob\gryphon_eagle\proj_0012_s2_lobcorr\main\lobx\src\correlator\handlers\inputhandler.py", line 175, in _read_socket
parsed_submit_time = datetime.datetime.strptime(message['msg']['submit_time'], '%Y-%m-%dT%H:%M:%S.%fZ')
AttributeError: 'module' object has no attribute '_strptime'
Alright, so this is a bit strange, as the call failed on the first thread but the second thread works completely fine using datetime.datetime.
Thoughts
Am I overwriting anything? It is Python, after all. Nope, I don't redefine datetime anywhere.
I am using inheritance through ABC; what about the parent? Nope, this bug was happening long before, and I don't override anything relevant in the parent.
My next thought was "Is this some Python black magic with the datetime module?", so I decided to try time.strptime instead.
The error
Exception in thread Thread-6:
Traceback (most recent call last):
File "C:\Users\kdarling\AppData\Local\Continuum\anaconda3\envs\lobsandbox\lib\threading.py", line 801, in __bootstrap_inner
self.run()
File "C:\Users\kdarling\AppData\Local\Continuum\anaconda3\envs\lobsandbox\lib\threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "d:\kdarling_lob\gryphon_eagle\proj_0012_s2_lobcorr\main\lobx\src\correlator\handlers\inputhandler.py", line 176, in _read_socket
parsed_submit_time = time.strptime(message['msg']['submit_time'], '%Y-%m-%dT%H:%M:%S.%fZ')
AttributeError: 'module' object has no attribute '_strptime_time'
The code
import datetime
import heapq
import json
import os
import socket
import sys
import time
from io import BytesIO
from threading import Thread

from handler import Handler


class InputStreamHandler(Handler):

    def __init__(self, configuration, input_message_heap):
        """
        Initialization function for InputStreamHandler.
        :param configuration: Configuration object that stores specific information.
        :type configuration: Configuration
        :param input_message_heap: Message heap that consumers thread will populate.
        :type input_message_heap: Heap
        """
        super(InputStreamHandler, self).__init__()
        self.release_size = configuration.get_release_size()
        self.input_src = configuration.get_input_source()
        self.input_message_heap = input_message_heap
        self.root_path = os.path.join(configuration.get_root_log_directory(), 'input', 'sensor_data')
        self.logging = configuration.get_logger()
        self.Status = configuration.Status
        self.get_input_status_fn = configuration.get_input_functioning_status
        self.update_input_status = configuration.set_input_functioning_status
        if configuration.get_input_state() == self.Status.ONLINE:
            self._input_stream = Thread(target=self._spinup_sockets)
        elif configuration.get_input_state() == self.Status.OFFLINE:
            self._input_stream = Thread(target=self._read_files)

    def start(self):
        """
        Starts the input stream thread to begin consuming data from the sensors connected.
        :return: True if thread hasn't been started, else False on multiple start fail.
        """
        try:
            self.update_input_status(self.Status.ONLINE)
            self._input_stream.start()
            self.logging.info('Successfully started Input Handler.')
        except RuntimeError:
            return False
        return True

    def status(self):
        """
        Displays the status of the thread, useful for offline reporting.
        """
        return self.get_input_status_fn()

    def stop(self):
        """
        Stops the input stream thread by ending the looping process.
        """
        if self.get_input_status_fn() == self.Status.ONLINE:
            self.logging.info('Closing Input Handler execution thread.')
            self.update_input_status(self.Status.OFFLINE)
            self._input_stream.join()

    def _read_files(self):
        pass

    def _spinup_sockets(self):
        """
        Enacts sockets onto their own thread to collect messages.
        Ensures that blocking doesn't occur on the main thread.
        """
        active_threads = {}
        while self.get_input_status_fn() == self.Status.ONLINE:
            # Check if any are online
            if all([value['state'] == self.Status.OFFLINE for value in self.input_src.values()]):
                self.update_input_status(self.Status.OFFLINE)
                for active_thread in active_threads.values():
                    active_thread.join()
                break
            for key in self.input_src.keys():
                # Check if key exists, if not, spin up call
                if (key not in active_threads or not active_threads[key].isAlive()) and self.input_src[key]['state'] == self.Status.ONLINE:
                    active_threads[key] = Thread(target=self._read_socket, args=(key, active_threads,))
                    active_threads[key].start()
            print(self.input_src)

    def _read_socket(self, key, cache):
        """
        Reads data from a socket, places message into the queue, and pop the key.
        :param key: Key corresponding to socket.
        :type key: UUID String
        :param cache: Key cache that corresponds the key and various others.
        :type cache: Dictionary
        """
        message = None
        try:
            sensor_socket = self.input_src[key]['sensor']
            ...
            message = json.loads(stream.getvalue().decode('utf-8'))
            if 'submit_time' in message['msg'].keys():
                # Inherited function
                self.write_to_log_file(self.root_path + key, message, self.release_size)
                message['key'] = key
                parsed_submit_time = time.strptime(message['msg']['submit_time'], '%Y-%m-%dT%H:%M:%S.%fZ')
                heapq.heappush(self.input_message_heap, (parsed_submit_time, message))
            cache.pop(key)
        except:
            pass
Additional thought
When this error wasn't being thrown, the two threads shared a common (inherited) function called write_to_log_file. Sometimes an error would occur where write_to_log_file checked whether a specific directory existed on the OS and got False even though it was there. Could this be something with Python and two threads accessing the same function at the same time, even in outside modules? That error was never consistent either.
Overall, this error won't arise when only running a single thread/connection.
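For what it's worth, a workaround that is often suggested for this exact symptom (hedged: it targets the lazy, not-thread-safe import that strptime performs the first time it is called, and I haven't verified it against this code base) is to force that import in the main thread before any worker threads start:

import _strptime  # force the lazy import up front, in the main thread
import datetime

# Alternatively, a throwaway call before spawning threads has the same effect:
datetime.datetime.strptime('2019-01-01T00:00:00.000Z', '%Y-%m-%dT%H:%M:%S.%fZ')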
I'm trying to run RobotFramework with Python3.6's asyncio.
The relevant Python-Code looks as follows:
""" SampleProtTest.py """
import asyncio
import threading
class SubscriberClientProtocol(asyncio.Protocol):
"""
Generic, Asynchronous protocol that allows sending using a synchronous accessible queue
Based on http://stackoverflow.com/a/30940625/4150378
"""
def __init__(self, loop):
self.loop = loop
""" Functions follow for reading... """
class PropHost:
def __init__(self, ip: str, port: int = 50505) -> None:
self.loop = asyncio.get_event_loop()
self.__coro = self.loop.create_connection(lambda: SubscriberClientProtocol(self.loop), ip, port)
_, self.__proto = self.loop.run_until_complete(self.__coro)
# run the asyncio-loop in background thread
threading.Thread(target=self.runfunc).start()
def runfunc(self) -> None:
self.loop.run_forever()
def dosomething(self):
print("I'm doing something")
class SampleProtTest(object):
def __init__(self, ip='127.0.0.1', port=8000):
self._myhost = PropHost(ip, port)
def do_something(self):
self._myhost.dosomething()
if __name__=="__main__":
tester = SampleProtTest()
tester.do_something()
If I run this file in python, it prints, as expected:
I'm doing something
To run the code in Robot-Framework, I wrote the following .robot file:
*** Settings ***
Documentation     Just A Sample
Library           SampleProtTest.py

*** Test Cases ***
Do anything
    do_something
But if I run this .robot-file, I get the following error:
Initializing test library 'SampleProtTest' with no arguments failed: This event loop is already running
Traceback (most recent call last):
File "SampleProtTest.py", line 34, in __init__
self._myhost = PropHost(ip, port)
File "SampleProtTest.py", line 21, in __init__
_, self.__proto = self.loop.run_until_complete(self.__coro)
File "appdata\local\programs\python\python36\lib\asyncio\base_events.py", line 454, in run_until_complete
self.run_forever()
File "appdata\local\programs\python\python36\lib\asyncio\base_events.py", line 408, in run_forever
raise RuntimeError('This event loop is already running')
Can someone explain to me why or how I can get around this?
Thank you very much!
EDIT
Thanks to @Dandekar I added some debug outputs (see code above), and I get the following output from robot:
- Loop until complete...
- Starting Thread...
- Running in thread...
==============================================================================
Sample :: Just A Sample
==============================================================================
Do anything - Loop until complete...
| FAIL |
Initializing test library 'SampleProtTest' with no arguments failed: This event loop is already running
Traceback (most recent call last):
File "C:\share\TestAutomation\SampleProtTest.py", line 42, in __init__
self._myhost = PropHost(ip, port)
File "C:\share\TestAutomation\SampleProtTest.py", line 24, in __init__
_, self.__proto = self.loop.run_until_complete(self.__coro)
File "c:\users\muechr\appdata\local\programs\python\python36\lib\asyncio\base_events.py", line 454, in run_until_complete
self.run_forever()
File "c:\users\muechr\appdata\local\programs\python\python36\lib\asyncio\base_events.py", line 408, in run_forever
raise RuntimeError('This event loop is already running')
------------------------------------------------------------------------------
Sample :: Just A Sample | FAIL |
1 critical test, 0 passed, 1 failed
1 test total, 0 passed, 1 failed
==============================================================================
Output: C:\share\TestAutomation\results\output.xml
Log: C:\share\TestAutomation\results\log.html
Report: C:\share\TestAutomation\results\report.html
As I see it, the problem is that the thread is already started BEFORE the test case runs. Oddly, if I remove the line
_, self.__proto = self.loop.run_until_complete(self.__coro)
it seems to run through, but I can't explain why... and it's not a practical solution anyway, because I'm not able to access __proto like this...
Edit: Comment out the part where your code runs at start
# if __name__=="__main__":
# tester = SampleProtTest()
# tester.do_something()
That piece gets run when you import your script in robot framework (causing the port to be occupied).
Also: If you are simply trying to run keywords asynchronously, there is a library that does that (although I have not tried it myself).
robotframework-async
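If you do need a handle on the protocol object (as in the original PropHost), another possible approach, sketched here assuming Python 3.5+ and reusing SubscriberClientProtocol from the question, is to avoid run_until_complete entirely: start the loop in a background thread first and schedule the connection onto it.

import asyncio
import threading

class PropHost:
    def __init__(self, ip, port=50505):
        # Dedicated loop, started in a daemon thread first, so nothing ever
        # calls run_until_complete() on a loop that is already running.
        self.loop = asyncio.new_event_loop()
        threading.Thread(target=self.loop.run_forever, daemon=True).start()
        # Schedule the connection onto the background loop and block until it is made.
        future = asyncio.run_coroutine_threadsafe(
            self.loop.create_connection(
                lambda: SubscriberClientProtocol(self.loop), ip, port),
            self.loop)
        _, self.proto = future.result(timeout=10)  # (transport, protocol)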
I can't get rpdb2 to run with Python 3.3, even though that should be possible according to several sources.
$ rpdb2 -d myscript.py
A password should be set to secure debugger client-server communication.
Please type a password:x
Password has been set.
Traceback (most recent call last):
File "/usr/local/bin/rpdb2", line 31, in <module>
rpdb2.main()
File "/usr/local/lib/python3.3/dist-packages/rpdb2.py", line 14470, in main
StartServer(_rpdb2_args, fchdir, _rpdb2_pwd, fAllowUnencrypted, fAllowRemote, secret)
File "/usr/local/lib/python3.3/dist-packages/rpdb2.py", line 14212, in StartServer
g_module_main = -1
File "/usr/local/lib/python3.3/dist-packages/rpdb2.py", line 14212, in StartServer
g_module_main = -1
File "/usr/local/lib/python3.3/dist-packages/rpdb2.py", line 7324, in trace_dispatch_init
self.__set_signal_handler()
File "/usr/local/lib/python3.3/dist-packages/rpdb2.py", line 7286, in __set_signal_handler
handler = signal.getsignal(value)
File "/usr/local/lib/python3.3/dist-packages/rpdb2.py", line 13682, in __getsignal
handler = g_signal_handlers.get(signum, g_signal_getsignal(signum))
ValueError: signal number out of range
The version of rpdb2 is RPDB 2.4.8 - Tychod.
I installed it by running pip-3.3 install winpdb.
Any clues?
I ran into the same problem today; here is what I did to get it working.
I'm still not sure whether this is the correct way to do it.
From:
def __getsignal(signum):
    handler = g_signal_handlers.get(signum, g_signal_getsignal(signum))
    return handler
To:
def __getsignal(signum):
    try:
        # The problems come from the signum which was 0.
        g_signal_getsignal(signum)
    except ValueError:
        return None

    handler = g_signal_handlers.get(signum, g_signal_getsignal(signum))
    return handler
This function should be at line 13681 or thereabouts.
The cause of the problem is the extended list of attributes in the signal module, which rpdb2 uses to enumerate all signals. Newer Python versions added attributes like SIG_BLOCK, SIG_UNBLOCK and SIG_SETMASK, so the filtering should be extended too (the patch changes just one line):
--- rpdb2.py
+++ rpdb2.py
@@ -7278,11 +7278,11 @@
     def __set_signal_handler(self):
         """
         Set rpdb2 to wrap all signal handlers.
         """
         for key, value in list(vars(signal).items()):
-            if not key.startswith('SIG') or key in ['SIG_IGN', 'SIG_DFL', 'SIGRTMIN', 'SIGRTMAX']:
+            if not key.startswith('SIG') or key in ['SIGRTMIN', 'SIGRTMAX'] or key.startswith('SIG_'):
                 continue
             handler = signal.getsignal(value)
             if handler in [signal.SIG_IGN, signal.SIG_DFL]:
                 continue
Unfortunately, there is no current official development or fork of winpdb, so for now this patch is just stored here on SO.