RobotFramework with Python's asyncio

I'm trying to run RobotFramework with Python 3.6's asyncio.
The relevant Python code looks as follows:
""" SampleProtTest.py """
import asyncio
import threading
class SubscriberClientProtocol(asyncio.Protocol):
"""
Generic, Asynchronous protocol that allows sending using a synchronous accessible queue
Based on http://stackoverflow.com/a/30940625/4150378
"""
def __init__(self, loop):
self.loop = loop
""" Functions follow for reading... """
class PropHost:
def __init__(self, ip: str, port: int = 50505) -> None:
self.loop = asyncio.get_event_loop()
self.__coro = self.loop.create_connection(lambda: SubscriberClientProtocol(self.loop), ip, port)
_, self.__proto = self.loop.run_until_complete(self.__coro)
# run the asyncio-loop in background thread
threading.Thread(target=self.runfunc).start()
def runfunc(self) -> None:
self.loop.run_forever()
def dosomething(self):
print("I'm doing something")
class SampleProtTest(object):
def __init__(self, ip='127.0.0.1', port=8000):
self._myhost = PropHost(ip, port)
def do_something(self):
self._myhost.dosomething()
if __name__=="__main__":
tester = SampleProtTest()
tester.do_something()
If I run this file in Python, it prints, as expected:
I'm doing something
To run the code in Robot Framework, I wrote the following .robot file:
*** Settings ***
Documentation    Just A Sample
Library          SampleProtTest.py

*** Test Cases ***
Do anything
    do_something
But if I run this .robot file, I get the following error:
Initializing test library 'SampleProtTest' with no arguments failed: This event loop is already running
Traceback (most recent call last):
  File "SampleProtTest.py", line 34, in __init__
    self._myhost = PropHost(ip, port)
  File "SampleProtTest.py", line 21, in __init__
    _, self.__proto = self.loop.run_until_complete(self.__coro)
  File "appdata\local\programs\python\python36\lib\asyncio\base_events.py", line 454, in run_until_complete
    self.run_forever()
  File "appdata\local\programs\python\python36\lib\asyncio\base_events.py", line 408, in run_forever
    raise RuntimeError('This event loop is already running')
Can someone explain to me why this happens, or how I can get around it?
Thank you very much!
EDIT
Thanks to @Dandekar I added some debug outputs (see code above) and get the following output from robot:
- Loop until complete...
- Starting Thread...
- Running in thread...
==============================================================================
Sample :: Just A Sample
==============================================================================
Do anything - Loop until complete...
| FAIL |
Initializing test library 'SampleProtTest' with no arguments failed: This event loop is already running
Traceback (most recent call last):
  File "C:\share\TestAutomation\SampleProtTest.py", line 42, in __init__
    self._myhost = PropHost(ip, port)
  File "C:\share\TestAutomation\SampleProtTest.py", line 24, in __init__
    _, self.__proto = self.loop.run_until_complete(self.__coro)
  File "c:\users\muechr\appdata\local\programs\python\python36\lib\asyncio\base_events.py", line 454, in run_until_complete
    self.run_forever()
  File "c:\users\muechr\appdata\local\programs\python\python36\lib\asyncio\base_events.py", line 408, in run_forever
    raise RuntimeError('This event loop is already running')
------------------------------------------------------------------------------
Sample :: Just A Sample | FAIL |
1 critical test, 0 passed, 1 failed
1 test total, 0 passed, 1 failed
==============================================================================
Output: C:\share\TestAutomation\results\output.xml
Log: C:\share\TestAutomation\results\log.html
Report: C:\share\TestAutomation\results\report.html
As I see it, the problem is that the thread is already started BEFORE the test case runs. Oddly, if I remove the line
_, self.__proto = self.loop.run_until_complete(self.__coro)
it seems to run through, but I can't explain why... And it is not a practical solution anyway, as I can't access __proto this way...

Edit: Comment out the part where your code runs at start:
# if __name__ == "__main__":
#     tester = SampleProtTest()
#     tester.do_something()
That piece gets run when you import your script in Robot Framework (causing the port to be occupied).
Also: if you are simply trying to run keywords asynchronously, there is a library that does just that (although I have not tried it myself):
robotframework-async
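If the goal is to keep a connection alive behind synchronous Robot Framework keywords, one pattern that avoids "This event loop is already running" is to start the loop in a background thread first and hand coroutines to it with asyncio.run_coroutine_threadsafe. A minimal sketch, assuming a plain asyncio stream connection (the class and attribute names here are illustrative, not from the original post):

import asyncio
import threading

class BackgroundLoop:
    """Runs an asyncio event loop in a dedicated thread."""

    def __init__(self):
        self.loop = asyncio.new_event_loop()
        # Start the loop before scheduling anything on it.
        threading.Thread(target=self.loop.run_forever, daemon=True).start()

    def connect(self, ip, port):
        # Submit the coroutine to the background loop and block this
        # synchronous caller until the connection is established.
        future = asyncio.run_coroutine_threadsafe(
            asyncio.open_connection(ip, port), self.loop)
        self.reader, self.writer = future.result(timeout=10)

Because run_until_complete is never called on the importing thread, initializing the library no longer collides with a loop that is already running.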


assertRaises(AttributeError, ...) not working. Module 'myModule' has no attribute 'main' (Python Unit Test)

I'm new to Python. My first unit test doesn't work.
Here is my telegram.py:
#!/usr/bin/python3
import socket
import sys
import urllib.parse
import certifi
import pycurl
from io import BytesIO

# telegram API key and chat id
TELEGRAM_API_KEY = 'xxx'
TELEGRAM_CHAT_ID = 'xxx'
DEBUG_MODE: bool = False

# stuff to run always here such as class/def
def main(msg):
    if not msg:
        print("No message to be sent has been passed.")
        exit(1)

    def debug(debug_type, debug_msg):
        if DEBUG_MODE:
            print(f"debug({debug_type}): {debug_msg}")

    def send_message(message):
        print("sending telegram...")
        c = pycurl.Curl()
        if DEBUG_MODE:
            c.setopt(pycurl.VERBOSE, 1)
            c.setopt(pycurl.DEBUGFUNCTION, debug)
        params = {
            'chat_id': TELEGRAM_CHAT_ID,
            'text': message
        }
        telegram_url = f"https://api.telegram.org/bot{TELEGRAM_API_KEY}/sendMessage?" + urllib.parse.urlencode(params)
        c.setopt(pycurl.CAINFO, certifi.where())
        storage = BytesIO()
        c.setopt(c.WRITEDATA, storage)
        c.setopt(c.URL, telegram_url)
        c.perform()
        c.close()
        print(storage.getvalue())

    send_message(f"{socket.gethostname()}: {msg}")

if __name__ == "__main__":
    # stuff only to run when not called via 'import' here
    if len(sys.argv) > 1:
        main(sys.argv[1])
    else:
        print("No message to be sent has been passed.")
        exit(1)
I want to test this script. I can call it directly from the shell with a command-line argument, or call it from another Python script like telegram.main("Test Message").
My unit test doesn't work. I expect an AttributeError because I don't pass the argument to telegram.main().
Here is the unit test:
import unittest
import telegram
import subprocess

class TelegramTestCase(unittest.TestCase):
    """Tests for 'telegram.py'"""

    def test_empty_telegram(self):
        """call telegram directly without an argument"""
        self.assertRaises(AttributeError, telegram.main())

    def test_string_telegram(self):
        """call telegram directly with correct argument"""
        telegram.main("TästString...123*ß´´OK")
        self.assertTrue(True)

if __name__ == '__main__':
    unittest.main()
The result of the first test case is:
Error
Traceback (most recent call last):
  File "C:\Users\XXX\AppData\Local\Programs\Python\Python38-32\lib\unittest\case.py", line 60, in testPartExecutor
    yield
  File "C:\Users\XXX\AppData\Local\Programs\Python\Python38-32\lib\unittest\case.py", line 676, in run
    self._callTestMethod(testMethod)
  File "C:\Users\XXX\AppData\Local\Programs\Python\Python38-32\lib\unittest\case.py", line 633, in _callTestMethod
    method()
  File "G:\Repositories\python\unittests\telegram.py", line 11, in test_empty_telegram
    self.assertRaises(AttributeError, telegram.main())
AttributeError: module 'telegram' has no attribute 'main'
Ran 1 test in 0.003s
FAILED (errors=1)
Process finished with exit code 1
What is the problem here? I think telegram.main() and telegram.main("Test") can't be found, but why?
I want to test telegram.main(), and I expect an AttributeError because I don't pass an argument, so the test should pass.
Thank you.
Update:
I changed the first test method to
def test_empty_telegram(self):
    """call telegram directly without an argument"""
    with self.assertRaises(AttributeError):
        telegram.main()
and it works. But the following test methods have the same errors.
def test_string_telegram(self):
    """call telegram directly with correct argument"""
    with self.assertRaises(SystemExit) as cm:
        telegram.main("TästString...123*ß´´OK")
    self.assertEqual(cm.exception.code, 0)
Output:
Testing started at 22:43 ...
C:\Users\XXX\AppData\Local\Programs\Python\Python38-32\python.exe C:\Users\XXX\AppData\Roaming\JetBrains\IntelliJIdea2020.2\plugins\python\helpers\pycharm_jb_unittest_runner.py --target telegram.TelegramTestCase.test_string_telegram
Launching unittests with arguments python -m unittest telegram.TelegramTestCase.test_string_telegram in G:\Repositories\python\unittests
Error
Traceback (most recent call last):
  File "C:\Users\XXX\AppData\Local\Programs\Python\Python38-32\lib\unittest\case.py", line 60, in testPartExecutor
    yield
  File "C:\Users\XXX\AppData\Local\Programs\Python\Python38-32\lib\unittest\case.py", line 676, in run
    self._callTestMethod(testMethod)
  File "C:\Users\XXX\AppData\Local\Programs\Python\Python38-32\lib\unittest\case.py", line 633, in _callTestMethod
    method()
  File "G:\Repositories\python\unittests\telegram.py", line 17, in test_string_telegram
    telegram.main("TästString...123*ß´´OK")
AttributeError: module 'telegram' has no attribute 'main'
Ran 1 test in 0.003s
FAILED (errors=1)
Process finished with exit code 1
Assertion failed
Assertion failed
Assertion failed
Update 2
The problem was that I named my module telegram.py, and the test file in a subfolder has the same name. The problem is solved by renaming the test file. Thanks to the helpful commenters!
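A quick way to spot this kind of shadowing is to check which file actually got imported (an illustrative snippet, not from the original post):

import telegram

# If this prints the path of your test file rather than the real module,
# a same-named file earlier on sys.path is shadowing the import.
print(telegram.__file__)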
As assertRaises needs a function object as its second argument, not the result of a call, just add lambda: in front of telegram.main(), like this:
def test_empty_telegram(self):
    self.assertRaises(AttributeError, lambda: telegram.main())
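For reference, unittest accepts either form: pass the callable (plus any arguments) and let assertRaises invoke it, or call it yourself inside the context manager. A small self-contained sketch (not from the original answer); note that calling main() without its required positional argument would actually raise TypeError rather than AttributeError, so the expected exception may need adjusting:

import unittest

class AssertRaisesForms(unittest.TestCase):
    def test_callable_form(self):
        # assertRaises calls the function for you;
        # len() with no arguments raises TypeError.
        self.assertRaises(TypeError, len)

    def test_context_manager_form(self):
        # Or perform the call yourself inside the with-block.
        with self.assertRaises(ZeroDivisionError):
            1 / 0

if __name__ == '__main__':
    unittest.main()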

How to run Locust with multiprocessing on a single machine

I want Locust to use all cores on my PC.
I have many Locust classes and I want to use Locust as a library.
Example of my code:
import gevent
from locust.env import Environment
from locust.stats import stats_printer
from locust.log import setup_logging
import time
from locust import HttpUser, TaskSet, task, between

def index(l):
    l.client.get("/")

def stats(l):
    l.client.get("/stats/requests")

class UserTasks(TaskSet):
    # one can specify tasks like this
    tasks = [index, stats]

    # but it might be convenient to use the @task decorator
    @task
    def page404(self):
        self.client.get("/does_not_exist")

class WebsiteUser(HttpUser):
    """
    User class that does requests to the locust web server running on localhost
    """
    host = "http://127.0.0.1:8089"
    wait_time = between(2, 5)
    tasks = [UserTasks]

def worker():
    env2 = Environment(user_classes=[WebsiteUser])
    env2.create_worker_runner(master_host="127.0.0.1", master_port=50013)
    # env2.runner.start(10, hatch_rate=1)
    env2.runner.greenlet.join()

def master():
    env1 = Environment(user_classes=[WebsiteUser])
    env1.create_master_runner(master_bind_host="127.0.0.1", master_bind_port=50013)
    env1.create_web_ui("127.0.0.1", 8089)
    env1.runner.start(20, hatch_rate=4)
    env1.runner.greenlet.join()

import multiprocessing
from multiprocessing import Process
import time

procs = []
proc = Process(target=master)
procs.append(proc)
proc.start()
time.sleep(5)
for i in range(multiprocessing.cpu_count()):
    proc = Process(target=worker)  # instantiating without any argument
    procs.append(proc)
    proc.start()
for process in procs:
    process.join()
This code doesn't work correctly.
(env) ➜ test_locust python main3.py
You are running in distributed mode but have no worker servers connected. Please connect workers prior to swarming.
Traceback (most recent call last):
  File "src/gevent/greenlet.py", line 854, in gevent._gevent_cgreenlet.Greenlet.run
  File "/home/alex/projects/performance/env/lib/python3.6/site-packages/locust/runners.py", line 532, in client_listener
    client_id, msg = self.server.recv_from_client()
  File "/home/alex/projects/performance/env/lib/python3.6/site-packages/locust/rpc/zmqrpc.py", line 44, in recv_from_client
    msg = Message.unserialize(data[1])
  File "/home/alex/projects/performance/env/lib/python3.6/site-packages/locust/rpc/protocol.py", line 18, in unserialize
    msg = cls(*msgpack.loads(data, raw=False, strict_map_key=False))
  File "msgpack/_unpacker.pyx", line 161, in msgpack._unpacker.unpackb
TypeError: unpackb() got an unexpected keyword argument 'strict_map_key'
2020-08-13T11:21:10Z <Greenlet at 0x7f8cf300c848: <bound method MasterRunner.client_listener of <locust.runners.MasterRunner object at 0x7f8cf2f531d0>>> failed with TypeError
Unhandled exception in greenlet: <Greenlet at 0x7f8cf300c848: <bound method MasterRunner.client_listener of <locust.runners.MasterRunner object at 0x7f8cf2f531d0>>>
Traceback (most recent call last):
  File "src/gevent/greenlet.py", line 854, in gevent._gevent_cgreenlet.Greenlet.run
  File "/home/alex/projects/performance/env/lib/python3.6/site-packages/locust/runners.py", line 532, in client_listener
    client_id, msg = self.server.recv_from_client()
  File "/home/alex/projects/performance/env/lib/python3.6/site-packages/locust/rpc/zmqrpc.py", line 44, in recv_from_client
    msg = Message.unserialize(data[1])
  File "/home/alex/projects/performance/env/lib/python3.6/site-packages/locust/rpc/protocol.py", line 18, in unserialize
    msg = cls(*msgpack.loads(data, raw=False, strict_map_key=False))
  File "msgpack/_unpacker.pyx", line 161, in msgpack._unpacker.unpackb
TypeError: unpackb() got an unexpected keyword argument 'strict_map_key'
ACTUAL RESULT: workers do not connect to the master and run users without a master
EXPECTED RESULT: workers run only with the master.
What is wrong?
You cannot use multiprocessing together with Locust/gevent (or at least it is known to cause issues).
Please spawn separate processes using subprocess or something completely external to Locust. Perhaps you could modify locust-swarm (https://github.com/SvenskaSpel/locust-swarm) to make it able to run worker processes on the same machine.
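A minimal sketch of that suggestion, starting one master and one worker per core with subprocess (the locustfile name is an assumption; adjust it to your project):

import multiprocessing
import subprocess

LOCUSTFILE = "locustfile.py"  # assumed filename
workers = multiprocessing.cpu_count()

# Start the master; --expect-workers makes it wait for all workers.
procs = [subprocess.Popen(
    ["locust", "-f", LOCUSTFILE, "--master", "--expect-workers", str(workers)]
)]

# Start one worker process per core.
for _ in range(workers):
    procs.append(subprocess.Popen(["locust", "-f", LOCUSTFILE, "--worker"]))

for p in procs:
    p.wait()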
I faced the same issue today, and since I didn't find a better option,
I added it like the following:
import subprocess
import sys

import configargparse
from locust import events

@events.init_command_line_parser.add_listener
def add_processes_arguments(parser: configargparse.ArgumentParser):
    processes = parser.add_argument_group("start multiple worker processes")
    processes.add_argument(
        "--processes",
        "-p",
        action="store_true",
        help="start slave processes to start",
        env_var="LOCUST_PROCESSES",
        default=False,
    )

@events.init.add_listener
def on_locust_init(environment, **kwargs):  # pylint: disable=unused-argument
    if (
        environment.parsed_options.processes
        and environment.parsed_options.master
        and environment.parsed_options.expect_workers
    ):
        environment.worker_processes = []
        master_args = [*sys.argv]
        worker_args = [sys.argv[0]]
        if "-f" in master_args:
            i = master_args.index("-f")
            worker_args += [master_args.pop(i), master_args.pop(i)]
        if "--locustfile" in master_args:
            i = master_args.index("--locustfile")
            worker_args += [master_args.pop(i), master_args.pop(i)]
        worker_args += ["--worker"]
        for _ in range(environment.parsed_options.expect_workers):
            p = subprocess.Popen(  # pylint: disable=consider-using-with
                worker_args, start_new_session=True
            )
            environment.worker_processes.append(p)
You can see the rest of the code here:
https://github.com/fruch/hydra-locust/blob/master/common.py#L27
And run it from the command line like this:
locust -f locustfile.py --host 172.17.0.2 --headless --users 1000 -t 1m -r 100 --master --expect-workers 2 --csv=example --processes

python luigi : requires() can not return Target objects

I'm really new to Luigi and I would like to set it up to execute my API calls.
I'm working with MockFiles, since the JSON objects I retrieve through the API are light and I want to avoid using an external database.
This is my code:
import luigi
from luigi import Task, run as runLuigi, mock as LuigiMock
import yaml

class getAllCountries(Task):
    task_complete = False

    def requires(self):
        return LuigiMock.MockFile("allCountries")

    def run(self):
        sync = Sync()
        # Get list of all countries
        countries = sync.getAllCountries()
        if countries is None or len(countries) == 0:
            Logger.error("Sync terminated. The country array is null")
        object_to_send = yaml.dump(countries)
        _out = self.output().open('r')
        _out.write(object_to_send)
        _out.close()
        task_complete = True

    def complete(self):
        return self.task_complete

class getActiveCountries(Task):
    task_complete = False

    def requires(self):
        return getAllCountries()

    def run(self):
        _in = self.input().read('r')
        serialised = _in.read()
        countries = yaml.load(serialised)
        doSync = DoSync()
        activeCountries = doSync.getActiveCountries(countries)
        if activeCountries is None or len(activeCountries) == 0:
            Logger.error("Sync terminated. The active country account array is null")
        task_complete = True

    def complete(self):
        return self.task_complete

if __name__ == "__main__":
    runLuigi()
I'm running the project with the following command :
PYTHONPATH='.' luigi --module app getActiveCountries --workers 2 --local-scheduler
It fails, and this is the stack trace that I got:
DEBUG: Checking if getActiveCountries() is complete
DEBUG: Checking if getAllCountries() is complete
INFO: Informed scheduler that task getActiveCountries__99914b932b has status PENDING
ERROR: Luigi unexpected framework error while scheduling getActiveCountries()
Traceback (most recent call last):
  File "/Users/thibaultlr/anaconda3/envs/testThib/lib/python3.6/site-packages/luigi/worker.py", line 763, in add
    for next in self._add(item, is_complete):
  File "/Users/thibaultlr/anaconda3/envs/testThib/lib/python3.6/site-packages/luigi/worker.py", line 861, in _add
    self._validate_dependency(d)
  File "/Users/thibaultlr/anaconda3/envs/testThib/lib/python3.6/site-packages/luigi/worker.py", line 886, in _validate_dependency
    raise Exception('requires() can not return Target objects. Wrap it in an ExternalTask class')
Exception: requires() can not return Target objects. Wrap it in an ExternalTask class
INFO: Worker Worker(salt=797067816, workers=2, host=xxx, pid=85795) was stopped. Shutting down Keep-Alive thread
ERROR: Uncaught exception in luigi
Traceback (most recent call last):
  File "/Users/thibaultlr/anaconda3/envs/testThib/lib/python3.6/site-packages/luigi/retcodes.py", line 75, in run_with_retcodes
    worker = luigi.interface._run(argv).worker
  File "/Users/thibaultlr/anaconda3/envs/testThib/lib/python3.6/site-packages/luigi/interface.py", line 211, in _run
    return _schedule_and_run([cp.get_task_obj()], worker_scheduler_factory)
  File "/Users/thibaultlr/anaconda3/envs/testThib/lib/python3.6/site-packages/luigi/interface.py", line 171, in _schedule_and_run
    success &= worker.add(t, env_params.parallel_scheduling, env_params.parallel_scheduling_processes)
  File "/Users/thibaultlr/anaconda3/envs/testThib/lib/python3.6/site-packages/luigi/worker.py", line 763, in add
    for next in self._add(item, is_complete):
  File "/Users/thibaultlr/anaconda3/envs/testThib/lib/python3.6/site-packages/luigi/worker.py", line 861, in _add
    self._validate_dependency(d)
  File "/Users/thibaultlr/anaconda3/envs/testThib/lib/python3.6/site-packages/luigi/worker.py", line 886, in _validate_dependency
    raise Exception('requires() can not return Target objects. Wrap it in an ExternalTask class')
Exception: requires() can not return Target objects. Wrap it in an ExternalTask class
Also, I'm running luigid in the background and I don't see any tasks run on it, nor whether they failed or not.
Any ideas?
Firstly, you are not seeing anything happen within the luigi daemon because your command specifies --local-scheduler. This disregards the daemon entirely and just runs the scheduler in the local process.
Second, in the getAllCountries task, you are specifying a Target as a requirement, when it should be in your output function. Once you've changed it from:
def requires(self):
    return LuigiMock.MockFile("allCountries")
to:
def output(self):
    return LuigiMock.MockFile("allCountries")
you won't need to redefine the complete function or set task_complete to True, because luigi will determine the task is complete by the presence of the output file. To find out more about targets take a look at: https://luigi.readthedocs.io/en/stable/workflows.html#target
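Putting both changes together, the task could look like this (a sketch based on the question's code; Sync and Logger come from the original project):

class getAllCountries(Task):
    def output(self):
        # The MockFile is now the output: luigi treats the task as
        # complete once this target exists, so no custom complete()
        # or task_complete flag is needed.
        return LuigiMock.MockFile("allCountries")

    def run(self):
        sync = Sync()  # from the original project
        countries = sync.getAllCountries()
        if countries is None or len(countries) == 0:
            Logger.error("Sync terminated. The country array is null")
        with self.output().open('w') as _out:
            _out.write(yaml.dump(countries))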
Side note: You can make this section:
_out = self.output().open('r')
_out.write(object_to_send)
_out.close()
a lot easier and less prone to bugs by just using Python's with functionality (note that the mode should be 'w', since the task writes the file):
with self.output().open('w') as _out:
    _out.write(object_to_send)
Python will automatically close the file when exiting the with scope and on error.
Second side note: Don't use luigi's run. It is deprecated. Use luigi.build instead.
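For example, the __main__ block from the question could become the following sketch (task name taken from the code above):

import luigi

if __name__ == "__main__":
    # Runs getActiveCountries and its dependencies on a local scheduler,
    # mirroring the --workers 2 --local-scheduler invocation.
    luigi.build([getActiveCountries()], workers=2, local_scheduler=True)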

Python 'module' object has no attribute '_strptime'

This error has been arising for quite some time but doesn't appear very frequently; it's time I squashed it.
I see that it appears whenever I have more than a single thread. The application is quite extensive, so I'll post the relevant code snippet down below.
I'm using datetime.datetime.strptime to format my message into a datetime object. When I use this within a multithreaded function, that error arises on one thread but works perfectly fine on the other.
The error
Exception in thread Thread-7:
Traceback (most recent call last):
  File "C:\Users\kdarling\AppData\Local\Continuum\anaconda3\envs\lobsandbox\lib\threading.py", line 801, in __bootstrap_inner
    self.run()
  File "C:\Users\kdarling\AppData\Local\Continuum\anaconda3\envs\lobsandbox\lib\threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "d:\kdarling_lob\gryphon_eagle\proj_0012_s2_lobcorr\main\lobx\src\correlator\handlers\inputhandler.py", line 175, in _read_socket
    parsed_submit_time = datetime.datetime.strptime(message['msg']['submit_time'], '%Y-%m-%dT%H:%M:%S.%fZ')
AttributeError: 'module' object has no attribute '_strptime'
Alright, so this is a bit strange, as this was called on the first thread, but the second thread works completely fine using datetime.datetime.
Thoughts
Am I overwriting anything? It is Python, after all. Nope, I don't overwrite datetime anywhere.
I am using inheritance through ABC; how about the parent? Nope, this bug was happening long before, and I don't overwrite anything in the parent.
My next go-to was thinking "Is this Python black magic with the datetime module?", so I decided to try time.strptime.
The error
Exception in thread Thread-6:
Traceback (most recent call last):
  File "C:\Users\kdarling\AppData\Local\Continuum\anaconda3\envs\lobsandbox\lib\threading.py", line 801, in __bootstrap_inner
    self.run()
  File "C:\Users\kdarling\AppData\Local\Continuum\anaconda3\envs\lobsandbox\lib\threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "d:\kdarling_lob\gryphon_eagle\proj_0012_s2_lobcorr\main\lobx\src\correlator\handlers\inputhandler.py", line 176, in _read_socket
    parsed_submit_time = time.strptime(message['msg']['submit_time'], '%Y-%m-%dT%H:%M:%S.%fZ')
AttributeError: 'module' object has no attribute '_strptime_time'
The code
import datetime
import heapq
import json
import os
import socket
import sys
import time
from io import BytesIO
from threading import Thread

from handler import Handler

class InputStreamHandler(Handler):
    def __init__(self, configuration, input_message_heap):
        """
        Initialization function for InputStreamHandler.
        :param configuration: Configuration object that stores specific information.
        :type configuration: Configuration
        :param input_message_heap: Message heap that consumers thread will populate.
        :type input_message_heap: Heap
        """
        super(InputStreamHandler, self).__init__()
        self.release_size = configuration.get_release_size()
        self.input_src = configuration.get_input_source()
        self.input_message_heap = input_message_heap
        self.root_path = os.path.join(configuration.get_root_log_directory(), 'input', 'sensor_data')
        self.logging = configuration.get_logger()
        self.Status = configuration.Status
        self.get_input_status_fn = configuration.get_input_functioning_status
        self.update_input_status = configuration.set_input_functioning_status
        if configuration.get_input_state() == self.Status.ONLINE:
            self._input_stream = Thread(target=self._spinup_sockets)
        elif configuration.get_input_state() == self.Status.OFFLINE:
            self._input_stream = Thread(target=self._read_files)

    def start(self):
        """
        Starts the input stream thread to begin consuming data from the sensors connected.
        :return: True if thread hasn't been started, else False on multiple start fail.
        """
        try:
            self.update_input_status(self.Status.ONLINE)
            self._input_stream.start()
            self.logging.info('Successfully started Input Handler.')
        except RuntimeError:
            return False
        return True

    def status(self):
        """
        Displays the status of the thread, useful for offline reporting.
        """
        return self.get_input_status_fn()

    def stop(self):
        """
        Stops the input stream thread by ending the looping process.
        """
        if self.get_input_status_fn() == self.Status.ONLINE:
            self.logging.info('Closing Input Handler execution thread.')
            self.update_input_status(self.Status.OFFLINE)
            self._input_stream.join()

    def _read_files(self):
        pass

    def _spinup_sockets(self):
        """
        Enacts sockets onto their own thread to collect messages.
        Ensures that blocking doesn't occur on the main thread.
        """
        active_threads = {}
        while self.get_input_status_fn() == self.Status.ONLINE:
            # Check if any are online
            if all([value['state'] == self.Status.OFFLINE for value in self.input_src.values()]):
                self.update_input_status(self.Status.OFFLINE)
                for active_thread in active_threads.values():
                    active_thread.join()
                break
            for key in self.input_src.keys():
                # Check if key exists, if not, spin up call
                if (key not in active_threads or not active_threads[key].isAlive()) and self.input_src[key]['state'] == self.Status.ONLINE:
                    active_threads[key] = Thread(target=self._read_socket, args=(key, active_threads,))
                    active_threads[key].start()
            print(self.input_src)

    def _read_socket(self, key, cache):
        """
        Reads data from a socket, places message into the queue, and pop the key.
        :param key: Key corresponding to socket.
        :type key: UUID String
        :param cache: Key cache that corresponds the key and various others.
        :type cache: Dictionary
        """
        message = None
        try:
            sensor_socket = self.input_src[key]['sensor']
            ...
            message = json.loads(stream.getvalue().decode('utf-8'))
            if 'submit_time' in message['msg'].keys():
                # Inherited function
                self.write_to_log_file(self.root_path + key, message, self.release_size)
                message['key'] = key
                parsed_submit_time = time.strptime(message['msg']['submit_time'], '%Y-%m-%dT%H:%M:%S.%fZ')
                heapq.heappush(self.input_message_heap, (parsed_submit_time, message))
                cache.pop(key)
        except:
            pass
Additional thought
When this error wasn't being thrown, the two threads shared a common function, write_to_log_file. Sometimes an error would occur where write_to_log_file checked whether the OS had a specific directory and got False even though the directory was there. Could this be something with Python and accessing the same function at the same time, even from outside modules? The error was never consistent, either.
Overall, this error doesn't arise when running only a single thread/connection.
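For what it's worth, a workaround commonly suggested for this class of error (an addition, not from the original post) is to force the lazy import of _strptime once in the main thread, before any worker threads call strptime:

import datetime

# datetime.datetime.strptime imports the _strptime module lazily on first
# use. If that first use happens concurrently in several threads, the
# import can fail with "'module' object has no attribute '_strptime'".
# Calling strptime once up front makes later threaded calls safe.
datetime.datetime.strptime('2000-01-01T00:00:00.000000Z', '%Y-%m-%dT%H:%M:%S.%fZ')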

winpdb not working with python 3.3

I can't get rpdb2 to run with Python 3.3, although that should be possible according to several sources.
$ rpdb2 -d myscript.py
A password should be set to secure debugger client-server communication.
Please type a password:x
Password has been set.
Traceback (most recent call last):
  File "/usr/local/bin/rpdb2", line 31, in <module>
    rpdb2.main()
  File "/usr/local/lib/python3.3/dist-packages/rpdb2.py", line 14470, in main
    StartServer(_rpdb2_args, fchdir, _rpdb2_pwd, fAllowUnencrypted, fAllowRemote, secret)
  File "/usr/local/lib/python3.3/dist-packages/rpdb2.py", line 14212, in StartServer
    g_module_main = -1
  File "/usr/local/lib/python3.3/dist-packages/rpdb2.py", line 14212, in StartServer
    g_module_main = -1
  File "/usr/local/lib/python3.3/dist-packages/rpdb2.py", line 7324, in trace_dispatch_init
    self.__set_signal_handler()
  File "/usr/local/lib/python3.3/dist-packages/rpdb2.py", line 7286, in __set_signal_handler
    handler = signal.getsignal(value)
  File "/usr/local/lib/python3.3/dist-packages/rpdb2.py", line 13682, in __getsignal
    handler = g_signal_handlers.get(signum, g_signal_getsignal(signum))
ValueError: signal number out of range
The version of rpdb2 is RPDB 2.4.8 - Tychod.
I installed it by running pip-3.3 install winpdb.
Any clues?
I got the same problem today; here is what I did to make it work.
Still, I'm not too sure whether doing it this way is correct.
From:
def __getsignal(signum):
    handler = g_signal_handlers.get(signum, g_signal_getsignal(signum))
    return handler
To:
def __getsignal(signum):
    try:
        # The problems come from the signum which was 0.
        g_signal_getsignal(signum)
    except ValueError:
        return None
    handler = g_signal_handlers.get(signum, g_signal_getsignal(signum))
    return handler
This function should be at line 13681 or somewhere near it.
The reason for the problem is the extended list of attributes in the signal module, which rpdb2 uses to enumerate all signals. Newer Python versions added attributes like SIG_BLOCK, SIG_UNBLOCK and SIG_SETMASK,
so the filtering has to be extended too (the patch changes just one line):
--- rpdb2.py
+++ rpdb2.py
@@ -7278,11 +7278,11 @@
     def __set_signal_handler(self):
         """
         Set rpdb2 to wrap all signal handlers.
         """
         for key, value in list(vars(signal).items()):
-            if not key.startswith('SIG') or key in ['SIG_IGN', 'SIG_DFL', 'SIGRTMIN', 'SIGRTMAX']:
+            if not key.startswith('SIG') or key in ['SIGRTMIN', 'SIGRTMAX'] or key.startswith('SIG_'):
                 continue
             handler = signal.getsignal(value)
             if handler in [signal.SIG_IGN, signal.SIG_DFL]:
                 continue
Unfortunately there is no current official development or fork of winpdb, so for now this patch will just live here on SO.
