How can I configure a Logstash agent that sends Python logs to the Redis broker?
I saw there is an option to use "beaver" as a background daemon, but I would rather use a Python module that is configured to send the logs directly instead of going through an intermediary.
Currently I'm using Python-Logstash, but I think it doesn't support inserting messages into a Redis queue.
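For reference, Logstash's redis input can read from a Redis list, so one option is a custom logging.Handler that RPUSHes JSON records onto such a list. This is a minimal sketch, not part of python-logstash; the list key "logstash", host, and field names are assumptions:

```python
import json
import logging
import redis

class RedisLogstashHandler(logging.Handler):
    """Push each log record as a JSON string onto a Redis list."""

    def __init__(self, host="localhost", port=6379, key="logstash"):
        super().__init__()
        self.client = redis.Redis(host=host, port=port)
        self.key = key

    def emit(self, record):
        try:
            payload = {
                "@message": self.format(record),
                "level": record.levelname,
                "logger": record.name,
            }
            self.client.rpush(self.key, json.dumps(payload))
        except Exception:
            self.handleError(record)

logger = logging.getLogger("myapp")
logger.addHandler(RedisLogstashHandler())
logger.error("something went wrong")
```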
I have a long-running server application written in Python 3, which I would like to interactively debug from time to time. For that I would like to run Python commands "within" the server application process, inspect the values of global variables, etc., like in a REPL or the standard Python console.
The Python standard library's code module and its InteractiveConsole class seem to be what I am looking for. I was thinking of running that in a separate thread so that the main application is not blocked while I communicate with it.
However, it seems that class provides interaction via standard input and output. That might not be exactly what I need. Is there a way to make that interactive console listen / connect to a socket and send input and output through this socket, so that I can connect to the console via a TCP connection?
Or is there another, better way to implement my requirement without this code module?
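One possible direction (a rough sketch only; the port, thread setup, and variable names are assumptions): subclass InteractiveConsole so that its raw_input and write methods go through a TCP socket, and run the accept loop in a daemon thread. Note that print() output from evaluated code still goes to the process's real stdout unless sys.stdout is also redirected.

```python
import code
import socket
import threading

def console_thread(host="127.0.0.1", port=4444, local_vars=None):
    """Serve an InteractiveConsole over a TCP connection (one client at a time)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        stream = conn.makefile("rw")

        class SocketConsole(code.InteractiveConsole):
            def write(self, data):            # error/traceback output
                stream.write(data)
                stream.flush()

            def raw_input(self, prompt=""):   # read the next line from the socket
                stream.write(prompt)
                stream.flush()
                line = stream.readline()
                if not line:
                    raise EOFError
                return line.rstrip("\n")

        try:
            SocketConsole(locals=local_vars or {}).interact(banner="Remote console")
        except SystemExit:
            pass
        finally:
            conn.close()

# Start alongside the server, e.g. with access to the module's globals:
threading.Thread(target=console_thread,
                 kwargs={"local_vars": globals()},
                 daemon=True).start()
```

You could then connect with `telnet localhost 4444` (or `nc`) and get a prompt inside the running process.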
I am attempting an integration test using pytest for my application developed in Python 3.7 with asyncio. The application is supposed to connect to a remote server, and if the network fails, my application should detect this and attempt to reconnect at a specified interval. Typically, in my integration test, I have my remote server already running and listening on a TCP port. My application should connect to that port, and I will check that the connection was successful.

Then I need to simulate a network outage in which the application loses its connection to the server, test the behaviour of the application while the network is not operational, and then bring the network back online and confirm that the app properly reconnects and performs its tasks. For the purposes of my integration testing, all of this runs on my localhost.
Does pytest already have something for this use case, or should I build some sort of proxy server myself? How would I go about doing this?
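One approach (not a pytest feature; this is a minimal sketch and the ports, class name, and buffer size are arbitrary) is a small asyncio TCP proxy that the application under test connects to instead of the real server. Stopping the proxy then looks like a network outage to the client, and starting it again restores connectivity; it could be driven from a fixture with something like pytest-asyncio.

```python
import asyncio

class TcpProxy:
    """Forward localhost traffic to the real server; stop() simulates an outage."""

    def __init__(self, listen_port, target_port, host="127.0.0.1"):
        self.listen_port = listen_port
        self.target_port = target_port
        self.host = host
        self._server = None
        self._writers = set()

    async def _pipe(self, reader, writer):
        try:
            data = await reader.read(4096)
            while data:
                writer.write(data)
                await writer.drain()
                data = await reader.read(4096)
        except ConnectionError:
            pass
        finally:
            writer.close()

    async def _handle(self, client_reader, client_writer):
        try:
            remote_reader, remote_writer = await asyncio.open_connection(
                self.host, self.target_port)
        except OSError:
            client_writer.close()
            return
        self._writers.update({client_writer, remote_writer})
        await asyncio.gather(self._pipe(client_reader, remote_writer),
                             self._pipe(remote_reader, client_writer))

    async def start(self):
        self._server = await asyncio.start_server(
            self._handle, self.host, self.listen_port)

    async def stop(self):
        # Close the listener and drop live connections so the application
        # under test sees the same thing as a real network failure.
        self._server.close()
        await self._server.wait_closed()
        for writer in self._writers:
            writer.close()
        self._writers.clear()
```

In the test, point the application at the proxy's listen port, await `proxy.stop()` to simulate the outage, and `proxy.start()` again to bring the "network" back.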
Pytest doesn't have any feature to simulate network failures because it is just a test runner.
You need to use an external mock server that can emulate connection failures or long response times. For this purpose I use and recommend the Mountebank mock server: http://www.mbtest.org/
With Mountebank you will be able to emulate any response from the remote server. You can manage Mountebank's behaviour directly through its API or use one of the client libraries: http://www.mbtest.org/docs/clientLibraries
I know this is an old question, and the solution is not related to pytest, but given how the title is formulated, it may be useful for people who come here for a similar reason. There is a library that simulates an internet failure. The original repository has a bug in the main library, but it is fixed in the fork I made.
Original repository
My fork
I have a Rails app. This app takes parameters from the users and sends them to what I think would be called a slave application server that runs heavy calculations in Python. This slave server is located on the same physical box and runs a Python "SimpleHTTPServer" (just a basic web server).
I set up the slave to receive commands through POST requests and run the calculations. Is it appropriate for the Python server to receive these requests through GET/POST even though it is on the same box, or should I be using another protocol?
**Note:** I have looked into rubypython and other direct connectors, but I need a separate app server to run the calculations.
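Plain HTTP over localhost is a reasonable choice here; the overhead is tiny and it keeps the two applications decoupled. Purely as an illustration (the port, JSON payload shape, and the placeholder calculation are all assumptions), the Python side could accept POSTed parameters with the standard library's http.server roughly like this:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CalcHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body that the Rails app POSTs.
        length = int(self.headers.get("Content-Length", 0))
        params = json.loads(self.rfile.read(length) or b"{}")

        # Placeholder for the real heavy calculation.
        result = {"answer": sum(params.get("values", []))}

        body = json.dumps(result).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), CalcHandler).serve_forever()
```

The Rails side would then POST JSON to http://127.0.0.1:8000/ and parse the response.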
I am new to Python and work on Slackware Linux with Python 3.4.3. I prefer simple, no-dependency solutions within a single Python script.
I am building a daemonized server program (A) which I need to access through both a regular shell CLI and GUIs in my web browser: it serves various files, uses a corresponding database, and updates a Firefox tab through Python's webbrowser module. Currently, I access process (A) via the CLI or a threaded network socket. This all started to work in a localhost scenario with all processes running on one machine.
Now, it turns out that the WebSocket protocol would make my setup dramatically simpler and cut out the traditional flow that uses Apache and complex frameworks as middlemen.
1st central question: How do I access daemon (A) with WebSockets from the CLI? I thought about firing up a non-daemon version of my server program, now called (B), and sending a program call to its (A) counterpart via the WebSocket protocol. This would make process (B) a WebSocket CLIENT, and process (A) a WebSocket SERVER. Is such a communication at all possible today?
2nd question: What is the best-suited template solution for this scenario that works with Python 3.4.3? I started to play with Pithikos' very sleek python-websocket-server template (see https://github.com/Pithikos/python-websocket-server), but I am unable to use it as a CLIENT (initiating the network call) to call its SERVER equivalent (receiving the call while residing in a daemonized process).
Problem 'solved': I gave up on the zero-dependency, zero-library idea:
pip install websockets
https://websockets.readthedocs.io
It works like a charm. The WebSocket server sits in the daemon process and receives and processes WebSocket client calls that come from the CLI processes and from the HTML GUIs.
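For illustration, a rough sketch of that arrangement with the websockets library (note that recent releases require a newer Python than 3.4 and use async/await syntax; the port, message format, and exact handler signature depend on the installed version and are assumptions here):

```python
import asyncio
import websockets

# Daemon side (A): WebSocket server that receives commands.
# (Older websockets versions pass the handler a second "path" argument.)
async def handle(websocket):
    command = await websocket.recv()
    await websocket.send("received: " + command)

async def serve_forever():
    async with websockets.serve(handle, "localhost", 8765):
        await asyncio.Future()   # run until the process is stopped

# CLI side (B): WebSocket client that sends one command and returns the reply.
async def send_command(command):
    async with websockets.connect("ws://localhost:8765") as ws:
        await ws.send(command)
        return await ws.recv()

if __name__ == "__main__":
    # In the daemon: asyncio.run(serve_forever())
    # From the CLI:
    print(asyncio.run(send_command("status")))
```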
I have a Celery setup running fine using RabbitMQ as the broker. I also have CELERY_SEND_TASK_ERROR_EMAILS=True in my settings. I receive emails if an exception is thrown while executing the tasks, which is fine.
My question is: is there a way, either with Celery or RabbitMQ, to receive an error notification, either from Celery if the broker connection cannot be established, or from RabbitMQ itself if the rabbitmq-server process dies?
I think the right tool for this job is a process-control system like supervisord, which launches/watches processes and can trigger events when those processes die or restart. More specifically, using the superlance plugin, you can send an email when a process dies.
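As a rough sketch (the program command, section names, and email address are placeholders), superlance's crashmail event listener can be wired into a supervisord configuration along these lines:

```ini
[program:rabbitmq]
command=/usr/sbin/rabbitmq-server
autorestart=true

; crashmail (from superlance) mails when a supervised process exits unexpectedly
[eventlistener:crashmail]
command=crashmail -a -m ops@example.com
events=PROCESS_STATE_EXITED
```

This covers the "rabbitmq-server dies" case; Celery's own broker-connection errors would still need to be detected on the Celery side.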