Crashing MR-3020 - python

I've got several MR-3020s that I've flashed with OpenWrt, each with a 16 GB ext4 USB drive mounted. On boot, a daemon shell script is started which does two things:
1) It constantly checks whether my main program is running and, if not, starts the Python script.
2) It compares the last heartbeat timestamp written by my main program and, if it is more than 10 minutes old, kills the Python process; #1 is then supposed to restart it. (A rough sketch of this watchdog loop is below.)
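For reference, the watchdog logic amounts to roughly the sketch below. It's written in Python only for readability (the real daemon is a shell script), and the paths, script name and check interval are made up:

import os
import subprocess
import time

SCRIPT = "/root/sniffer.py"          # hypothetical path to the main script
HEARTBEAT_FILE = "/root/heartbeat"   # file the main script updates periodically
MAX_AGE = 10 * 60                    # kill if the heartbeat is older than 10 minutes

def pid_of(script):
    # Return the PID of the running script, or None if it is not running.
    out = subprocess.run(["pgrep", "-f", script], capture_output=True, text=True)
    return int(out.stdout.split()[0]) if out.stdout.strip() else None

while True:
    pid = pid_of(SCRIPT)
    if pid is None:
        subprocess.Popen(["python", SCRIPT])      # 1) not running -> start it
    else:
        try:
            age = time.time() - os.path.getmtime(HEARTBEAT_FILE)
            if age > MAX_AGE:
                os.kill(pid, 9)                   # 2) stale heartbeat -> kill; #1 restarts it
        except OSError:
            pass                                  # heartbeat file missing; leave the process alone
    time.sleep(30)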
Once running, my main script goes into monitor mode and collects packet information. It periodically stops sniffing, connects to the internet, uploads the data to my server, saves the heartbeat timestamp and then goes back into monitor mode.
This will run for a couple of hours, days, or even a few weeks, but it always seems to die at some point. I've been chasing this issue on and off for nearly 6 months and I've run out of ideas. I've got error-, info- and debug-level logging on pretty much every line of the Python script. The amount of memory used by the Python process seems to hold steady. All network calls are wrapped in try/except blocks. The daemon writes to logread. Even with all that logging, I can't track down what the issue might be. There don't seem to be any endless loops, and none of the errors (usually an HTTP request made before the internet connection is up) is ever the final log record; the device just seems to freeze up randomly.
Any advice on how to further track this down?

It could be related to many things; here are some I had to fix myself. Check the router's external power supply: it needs to be stable. The USB drive can also draw more current than the port can handle; a simple fix is an externally powered USB hub, or keep the same port but add capacitors (around 1000 µF) in parallel across the power line right at the USB port where the drive plugs in.

Related

digi.xbee module is really slow when using threading

I have two devices connected to my PC, and I want to configure my DigiMesh devices (one is in transparent mode and one is in API mode). I want to send Remote AT commands to the transparent device and AT commands to the API-mode device.
In my system I first classify the devices to find their MAC address and COM port automatically; to do this, I open a connection to each, read the data and close it.
When I run my test script that configures them, after opening a couple of threads the AT and Remote AT commands take drastically longer, become unstable and sometimes fail, and opening the device (DigiMeshDevice.open()) takes a long time and usually times out.
When I run it before the threads it works perfectly.
Looking at Task Manager before and after running the threads, the CPU usage doesn't go up by much, so I don't think it's a resource problem. Do you have any idea what could cause this? Do I need to move to multiprocessing instead? Could this be related to the USB drivers or something like that?
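For reference, the threaded access boils down to something like the sketch below. The COM port, baud rate and the remote 64-bit address are made up, and the lock is only one idea for making sure a single thread talks to the shared serial port at a time; it is not a confirmed fix:

import threading
from digi.xbee.devices import DigiMeshDevice, RemoteXBeeDevice
from digi.xbee.models.address import XBee64BitAddress

device = DigiMeshDevice("COM5", 9600)    # API-mode device attached to the PC (assumed port)
device.open()
xbee_lock = threading.Lock()             # serialize all access to the local radio

remote = RemoteXBeeDevice(
    device, XBee64BitAddress.from_hex_string("0013A20012345678"))  # made-up address

def read_remote_ni():
    with xbee_lock:
        return remote.get_parameter("NI")    # Remote AT command to the remote device

def read_local_ni():
    with xbee_lock:
        return device.get_parameter("NI")    # local AT command to the API-mode device

threads = [threading.Thread(target=read_remote_ni) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
device.close()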

How can I find out which program is sending data out, and what the data is?

I have a connection to a websocket server that some Python code runs, and after some time (it doesn't happen right away), it begins to send massive amounts of data out, as shown in the screenshot. It's around 4.8 megabytes per second on average. Prior to it starting to do this, it sends maybe 30 kB/s in what I assume is normal operation.
It can add up to hundreds of gigabytes in a day or two, depending on network speeds.
If I kill the original Python process that was running and using the websocket client (kill -9 1234), it does not stop the traffic. I'm using Little Snitch for macOS to obtain this information, and I feel like there is more info available under the surface that I should be able to get at to find out what's sending/receiving this data and what the data is.
If I terminate iTerm itself, the traffic still doesn't stop. If I launch the regular Mac Terminal and do a ps aux | grep iterm I get nothing, and I get the same 4 processes shown in the screenshot if I do a ps aux | grep python...
This kind of throughput is really high; it's enough to be streaming my screen, uploading my entire hard drive, etc., or maybe it's just a code bug and it's sending garbage.
The only other relevant things I can think of adding right now are:
This is a brand-new 13" MacBook Pro with the M1 chip.
According to MenuMeters (a resource monitor) there have been 16 billion page faults; no idea if that's normal.
I have tried testing this by rebooting and simply not launching my Python code that gets the websocket data; I can wait a while and nothing seems to happen, so I think it only starts after I launch the connection and then wait a while.
Sorry, I wish I knew how to get more relevant information for you, but if anyone has a good idea of how I can generate better logs or dig deeper I'd appreciate it.
[Screenshot of Little Snitch traffic]
Wireshark will allow you to track all connections and check what is inside the packets.
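Besides Wireshark, a quick way to tie the traffic to a specific process is to list which processes currently own internet sockets. The sketch below uses the third-party psutil package (pip install psutil) and is only a starting point; on macOS it may need to be run with sudo to see other users' processes:

import psutil

for conn in psutil.net_connections(kind="inet"):
    if conn.raddr and conn.pid:                 # only established sockets with a known owner
        try:
            name = psutil.Process(conn.pid).name()
        except psutil.NoSuchProcess:
            continue
        print(conn.pid, name,
              f"{conn.laddr.ip}:{conn.laddr.port} -> {conn.raddr.ip}:{conn.raddr.port}")

Once the PID is known, something like lsof -p <pid> can show what files and sockets that process has open.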

Icecast stream switching sources without the clients stopping playback

Good morning :)
I have some questions regarding an Icecast setup.
We have my own Nextcloud server sitting in our church. This works perfectly fine, and since here in Germany church services have already been banned once during the Covid-19 restrictions, we want to have a streaming setup. I managed to set up an Icecast instance on our server, and we use my old laptop with Rocket Broadcaster and a Steinberg CI2 to provide the source for the stream. That all works as intended. We already used it once, because we stopped public services for two weeks after one of our members tested positive after he had been abroad for a week.
Our operator at the PA doesn't want another display there, which would distract him from listening to the sermon.
My project regarding this: I have an RPi 4 and a Behringer U-Phoria UMC202HD. The input at the moment is a standard mic connected to the interface.
The Pi is configured with darkice and uses its own Icecast installation while I am testing everything at home. Since I started the streaming project, after the service we switch Rocket Broadcaster over to VLC and a folder of old service recordings (mastered, in MP3 format) to provide a source people can listen to while at work or travelling. This option gets used pretty regularly and I want to keep it going.
My plan is to have a little box with an LED level meter that shows the input level. That should be done with a little Python script. I also want to add two buttons that load presets for the two setups. Button 1: kill the current source and start the darkice live stream. Button 2: kill the current source and start playback of the old recordings. Both options should give visual feedback to the operator. The Raspberry Pi has to work headless, without a VNC or SSH connection being needed for normal usage.
My problem: I tried:
sudo killall darkice && /home/pi/darkice.sh
This code will be changed, because I will probably have to use ices for the MP3 playback. So basically it kills darkice and starts the ices playback (for now it only restarts the stream in a blink), and vice versa.
The bash file exists and gets executed at reboot via a cronjob; that works well. When I execute the above-mentioned killall command, Icecast resumes the broadcast almost instantly, but the stream stops on the clients. Everyone needs to restart it.
Do I have an option to change the setup so that I can switch between the two sources without everyone needing to restart?
My plan was to create a bash script that does all of this, and execute it via GPIO input and Python code.
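For illustration, the button-and-LED part could look roughly like the sketch below, using the gpiozero library. The pin numbers, LED wiring and the name of the ices playback script are assumptions, and this does not by itself solve the client-reconnect problem:

import subprocess
from signal import pause
from gpiozero import Button, LED

live_btn = Button(17)      # Button 1: switch to the darkice live stream
replay_btn = Button(27)    # Button 2: switch to playback of old recordings
live_led = LED(22)         # visual feedback for the operator
replay_led = LED(23)

def start_live():
    subprocess.run(["sudo", "killall", "ices"])       # stop the current source
    subprocess.Popen(["/home/pi/darkice.sh"])         # start the live stream
    live_led.on()
    replay_led.off()

def start_replay():
    subprocess.run(["sudo", "killall", "darkice"])
    subprocess.Popen(["/home/pi/ices-playback.sh"])   # hypothetical playback script
    replay_led.on()
    live_led.off()

live_btn.when_pressed = start_live
replay_btn.when_pressed = start_replay
pause()    # keep the script running headless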
Thanks in advance!

Is it standard practice to keep a FIX connection connected all day long, or relogin periodically?

I wrote a program in Python using the quickfix package which connects to a vendor via FIX. We log in in the morning but don't actually send messages through the connection until the end of the day. The issue is, we don't want to keep the program open for the entire day; we would rather log back in in the afternoon when we need to send the messages.
The vendor is requesting that we stay logged in for the full duration between the start and stop times specified in our configuration. This is only possible by leaving my program running for the whole day, because if I close it, the messages the vendor sends aren't registered as received by me. I don't send a logout message, though.
Is it common practice to write a program to connect via FIX and leave it running for the entire session time? Or is it acceptable to close the program, given I don't send a logout message, and reconnect at a later time in the day?
Any design or best practice advice would be helpful here.
I don't know what others have done, but I used QuickFIX with Python for years and never had any problem running my system all day, OR shutting it down periodically for whatever reason and reconnecting. In the end I wound up leaving the system connected for weeks at a time, since that allowed me to record data.
I would say that the answer to both of your questions is YES. It is common to leave it running. Also, it is acceptable to just close the program.
There can always be edge cases and idiosyncratic features of your implementation and your counterparty, so you should seek to understand more why they have asked you not to disconnect. That sounds very strange to me. Is their FIX engine not capable of something very simple and standard?
Yes it is common to keep the FIX sessions running for a long time. That should not be an issue.
You can't just shut down your program on your end: session-level FIX Heartbeat (35=0) messages, sent periodically (usually every 30 s), are meant to keep the underlying TCP connection open and to check that both ends are still up and running properly.
From the details you gave, if your vendor (which is likely the acceptor side) requests it, it might be because they need to send you messages as they occur, without delay.
If you (the initiator side) are not logged in at that time, they won't be able to send those messages, as they can't initiate a session with you.
The vendor might monitor sessions as well, but requiring that of an initiator sounds odd; more likely they monitor for unexpected session drops.
All in all, it depends very much on your vendor anyway; you have to follow what they say.
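For context, a long-running initiator with QuickFIX/Python is only a few lines; the sketch below logs on at start and then simply idles while the engine exchanges heartbeats, until it is stopped at the end of the session. The config file name is a placeholder:

import time
import quickfix as fix

class App(fix.Application):
    # Minimal no-op application callbacks required by the engine.
    def onCreate(self, sessionID): pass
    def onLogon(self, sessionID): print("Logged on:", sessionID)
    def onLogout(self, sessionID): print("Logged out:", sessionID)
    def toAdmin(self, message, sessionID): pass
    def fromAdmin(self, message, sessionID): pass
    def toApp(self, message, sessionID): pass
    def fromApp(self, message, sessionID): pass

settings = fix.SessionSettings("initiator.cfg")       # placeholder config file name
initiator = fix.SocketInitiator(App(), fix.FileStoreFactory(settings),
                                settings, fix.FileLogFactory(settings))
initiator.start()
try:
    while True:
        time.sleep(1)         # idle all day; the engine keeps the session alive
finally:
    initiator.stop()          # sends Logout and closes the TCP connection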

Raspberry pi - AdaFruit DHT in wsgi script (non-root)

Using the ADAFRUIT_DHT python library from https://github.com/adafruit/Adafruit_Python_DHT and a DHT22 temp/humidity sensor (https://www.adafruit.com/products/393) I'm able to easily read temperature and humidity values.
The problem is that I need to run my script as root in order to interact with the GPIO pins. This is simply not possible when running my script through a website via WSGI, as Apache will not (for good reason) let me set the WSGIDaemonProcess user to root.
I've got pigpiod running, which allows me to interact with the GPIO as a non-root user; however, Adafruit_DHT doesn't go through the daemon and interacts with the GPIO directly. I'm not 100% sure the pigpio daemon would be fast enough for the bit-banging required to decode the response from the DHT22 unit, but perhaps.
So, is there a way for me to coerce the Adafruit_DHT library into not requiring root, or are there alternatives to the library that might accomplish what I'm looking for?
Create a small server that runs as root and listens on a local Unix or TCP socket. When another process connects, your server reads the data from the sensor and returns it.
Now your WSGI process only needs permissions to connect to the listening socket, which can be easily managed either via permissions on a Unix socket, or simply throwing access control to the wind and opening a TCP socket bound to the localhost address (so that only processes on the local machine can connect).
There are several advantages to doing this; for example, you can now have multiple programs consuming the temperature data at the same time without worrying about device contention (because only the temperature server actually reads the sensor). You can even implement short-term caching to provide faster responses.
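A minimal sketch of that split might look like the code below, assuming a Unix socket at /var/run/dht22.sock and the DHT22 data line on GPIO 4 (both assumptions). The server is started as root; the WSGI code then just connects to the socket as a normal user and reads the JSON reply.

import json
import os
import socketserver
import Adafruit_DHT

SOCKET_PATH = "/var/run/dht22.sock"
PIN = 4                                    # GPIO pin the DHT22 data line uses (assumed)

class Handler(socketserver.BaseRequestHandler):
    def handle(self):
        # Read the sensor on each request and return the values as JSON.
        humidity, temperature = Adafruit_DHT.read_retry(Adafruit_DHT.DHT22, PIN)
        self.request.sendall(json.dumps(
            {"temperature": temperature, "humidity": humidity}).encode())

if os.path.exists(SOCKET_PATH):
    os.remove(SOCKET_PATH)
server = socketserver.UnixStreamServer(SOCKET_PATH, Handler)
os.chmod(SOCKET_PATH, 0o666)               # allow non-root clients to connect
server.serve_forever()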
Also, note that there is a raspberry pi specific stackexchange.
pigpio can certainly read the DHT11/22 etc. sensors.
There are two examples using the daemon (which means root privileges are not required).
A DHT11/21/22/33/44 sensor reader written in C, which auto-detects the model.
A DHT22 reader written in Python, which only handles the DHT22 (the GitHub repo has a DHT11 example).
Both examples are likely to give reliable results (read error rates of better than 1 in 10 thousand rather than worse than 1 in 10).
