Using the Adafruit_DHT Python library from https://github.com/adafruit/Adafruit_Python_DHT and a DHT22 temperature/humidity sensor (https://www.adafruit.com/products/393), I'm able to easily read temperature and humidity values.
The problem is that I need to run my script as root in order to interact with the GPIO pins. This is simply not possible when running my script through a website via WSGI, as Apache will not let me (for good reason) set the WSGIDaemonProcess's user to root.
I've got pigpiod running, which allows me to interact with the GPIO through the daemon as a non-root user. However, Adafruit_DHT doesn't go through the daemon and interacts directly with the GPIO. I'm not 100% sure the pigpio daemon would be fast enough for the bit-banging required to decode the response from the DHT22 unit, but perhaps.
So, is there a way for me to coerce the Adafruit_DHT library not to require being run as root, or are there alternatives to the library that might accomplish what I'm looking for?
Create a small server that runs as root and listens on a local Unix or TCP socket. When another process connects, the server reads the data from the sensor and returns it.
Now your WSGI process only needs permission to connect to the listening socket, which can be easily managed either via permissions on a Unix socket, or by simply throwing access control to the wind and opening a TCP socket bound to the localhost address (so that only processes on the local machine can connect).
There are several advantages to doing this. For example, you can now have multiple programs consuming the temperature data at the same time without worrying about device contention (because only the temperature server is actually reading the sensor). You can even implement short-term caching to provide faster responses.
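For illustration, here is a minimal sketch of such a server over a Unix socket, using the library's read_retry helper; the socket path and GPIO pin number are assumptions to adjust for your setup:

    # sensor_server.py -- run as root; clients connect and get a JSON reading
    import json
    import os
    import socketserver

    import Adafruit_DHT  # from Adafruit_Python_DHT

    SOCKET_PATH = "/run/dht22.sock"  # assumed path
    DHT_PIN = 4                      # assumed BCM pin number

    class SensorHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # read_retry retries until it gets a valid reading or gives up
            # (in which case it returns None, None)
            humidity, temperature = Adafruit_DHT.read_retry(
                Adafruit_DHT.DHT22, DHT_PIN)
            self.wfile.write(json.dumps(
                {"temperature": temperature, "humidity": humidity}).encode())

    if __name__ == "__main__":
        if os.path.exists(SOCKET_PATH):
            os.remove(SOCKET_PATH)
        with socketserver.UnixStreamServer(SOCKET_PATH, SensorHandler) as server:
            os.chmod(SOCKET_PATH, 0o666)  # let the WSGI user connect
            server.serve_forever()

The WSGI side then just opens the socket and reads the JSON reply, with no special privileges required.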
Also, note that there is a Raspberry Pi-specific Stack Exchange.
pigpio can certainly read the DHT11/22 etc. sensors.
There are two examples using the daemon (which means root privileges are not required).
A DHT11/21/22/33/44 sensor reader written in C, which auto-detects the model.
A DHT22 reader written in Python, which only handles the DHT22 (the GitHub repository also has a DHT11 example).
Both examples are likely to give reliable results (read-error rates better than 1 in 10,000, rather than worse than 1 in 10).
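For orientation, a minimal sketch of the mechanism those examples build on is below: a non-root client connects to the daemon, triggers a reading, and receives edge timings via a callback. GPIO 4 is an assumption, and the linked examples do the actual bit decoding:

    import time
    import pigpio

    pi = pigpio.pi()  # connects to pigpiod on localhost:8888; no root needed
    if not pi.connected:
        raise SystemExit("pigpiod is not running")

    DHT_GPIO = 4  # assumed BCM pin number

    def on_edge(gpio, level, tick):
        # tick is in microseconds; the gap between successive edges encodes
        # each DHT22 bit (a short high pulse is a 0, a long one is a 1)
        print(gpio, level, tick)

    cb = pi.callback(DHT_GPIO, pigpio.EITHER_EDGE, on_edge)

    # Trigger a reading: hold the line low briefly, then release it and let
    # the sensor answer; the callback above receives the edge timings.
    pi.set_mode(DHT_GPIO, pigpio.OUTPUT)
    pi.write(DHT_GPIO, 0)
    time.sleep(0.017)
    pi.set_mode(DHT_GPIO, pigpio.INPUT)

    time.sleep(0.5)  # give the transfer time to complete
    cb.cancel()
    pi.stop()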
Related
I have two DigiMesh devices connected to my PC (one in transparent mode and one in API mode), and I want to configure them: send Remote AT commands to the transparent device and AT commands to the API-mode device.
In my system, I first classify the devices to find their MAC addresses and COM ports automatically; to do this, I open a connection to each of them, get the data, and close it.
When I run my test script that configures them after opening a couple of threads, the response time of the AT and Remote AT commands increases drastically; it is also not very stable and sometimes doesn't work. In addition, opening the device (DigiMeshDevice.open()) takes a long time and, most of the time, times out.
When I run it before starting the threads, it works perfectly.
Looking at the task manager before and after running the threads, CPU usage doesn't go up by much, so I don't think it's a resource problem. Do you have any idea what could cause this? Do I need to move to multiprocessing instead? Could this be related to the USB drivers or something like that?
I'm currently working on a gateway with embedded Linux and a web server. The goal of the gateway is to retrieve data from electrical devices over an RS485/Modbus line and to display it on a server.
I'm using Nginx and Django, and the web front-end is delivered as "static" files. A JavaScript file repeatedly makes AJAX calls that send CGI requests to Nginx. These CGI requests are answered with JSON responses by Django. The responses are mostly data that has been read from the appropriate Modbus device.
The exact path is the following:
Randomly timed CGI call -> urls.py -> ModbusCGI.py (which imports another script, ModbusComm.py) -> ModbusComm.py creates a Modbus client and immediately tries to read with it.
Alongside that, I wanted to implement a datalogger to store data in a DB at regular intervals. I made a script that also imports ModbusComm.py, but it doesn't work: sometimes multiple Modbus frames are sent at the same time (the datalogger and CGI scripts call the same function in ModbusComm.py at the same time), which results in an error.
I'm sure this problem would also occur if there were a lot of users on the server (CGI requests sent at the same time). Or would it? (Is there already a queue system managing CGI requests? I'm a bit lost.)
So my goal would be to make a queue system that could handle calls from several Python scripts: make them wait while it's not their turn, call a function with the right arguments when it is their turn (actually using the Modbus line), and send the response back to the calling Python script so it can generate the JSON response.
I really don't know how to achieve that, and I'm sure there are better ways to do this.
If I'm not clear enough, don't hesitate to make me aware of it :)
I had the same problem when I had to allow multiple processes to read Modbus (and not only Modbus) data through a serial port. I ended up with a standalone process (a "serial port server") that works exclusively with the serial port. All other processes talk to that port through the standalone process via some inter-process communication mechanism (we used Unix sockets).
This way, when an application wants to read a Modbus register, it connects to the "serial port server", sends its request, and receives the response. All the actual serial port communication is done by the "serial port server" sequentially, to ensure consistency.
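A minimal sketch of that pattern in Python is below; the socket path is an assumption, and read_register is a placeholder where your actual Modbus client (e.g. pymodbus) would go. A lock ensures only one request touches the wire at a time:

    import json
    import os
    import socketserver
    import threading

    SOCKET_PATH = "/tmp/modbus-server.sock"  # assumed path
    port_lock = threading.Lock()  # serializes access to the one serial line

    def read_register(unit, address):
        # Placeholder: do the actual Modbus read here with your client
        raise NotImplementedError

    class ModbusHandler(socketserver.StreamRequestHandler):
        def handle(self):
            request = json.loads(self.rfile.readline())
            with port_lock:  # one request on the wire at a time
                value = read_register(request["unit"], request["address"])
            self.wfile.write((json.dumps({"value": value}) + "\n").encode())

    if __name__ == "__main__":
        if os.path.exists(SOCKET_PATH):
            os.remove(SOCKET_PATH)
        server = socketserver.ThreadingUnixStreamServer(SOCKET_PATH, ModbusHandler)
        server.serve_forever()

Both the Django views and the datalogger then become simple socket clients, and the serialization happens in one place.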
A Python program needs to accept a string every second from a serial port. I plan on using an RS-232 to USB converter. The application is going to run under Ubuntu 10.04.
How do I approach this? Do I use pySerial or libusb?
Some processing needs to be done in the meantime, so synchronous communication is not viable. Do I use some kind of interrupt, or do I need to open separate threads? Or do I use blocking reads, believing that 1 s is enough for my computations (it is plenty... for now)?
I know, RTFM, but heading in the correct direction from the start will save me a lot of time! Thanks for bearing with me.
If your RS232-USB converter has a driver in Ubuntu that makes it look like a COM port, then you will want to use pySerial (the interface is the same as for any other COM port). If there is no driver for your device, then you might have to use libusb and work out the protocol for your specific device. Most major RS232-USB converters these days have usbserial-based drivers committed and maintained in the Linux kernel; simply check with your vendor.
There are many ways to do parallel processing, but typically I write my applications in one of two ways:
Have a read thread that does nothing but read and fill a local, thread-safe buffer, so data is ready for other threads when needed.
Have a read thread that reads data, determines where it goes, and delivers it via messaging/event processing to the component that needs it.
The decision here will depend on what your goals are and how much processing is needed outside of the reading; a sketch of the first approach follows.
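As a sketch of the first approach (the device path and baud rate are assumptions), a dedicated reader thread can fill a queue.Queue that the rest of the program drains whenever convenient:

    import queue
    import threading

    import serial  # pySerial

    buf = queue.Queue()  # thread-safe buffer between reader and main loop

    def reader():
        port = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)
        while True:
            line = port.readline()  # blocks until a line arrives or timeout
            if line:
                buf.put(line)

    threading.Thread(target=reader, daemon=True).start()

    while True:
        # ... do the other processing here ...
        try:
            line = buf.get_nowait()  # non-blocking: take data if present
            print("received:", line)
        except queue.Empty:
            pass

This way a computation that runs long never loses serial data; incoming strings simply queue up until the main loop gets back to them.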
If this is a stupid question, please don't mind me. I spent some time trying to find the answer, but I couldn't get anything solid. Maybe this is a hardware question, but I figured I'd try here first.
Does serial communication only work one-to-one? The reason this came up is that I had an Arduino board listening for communication on its serial port, and a Python script feeding bytes to the port as well. However, whenever I opened up the Arduino's serial monitor, the connection with the Python script failed. (The serial monitor also connects to the serial port, for its little text input field.)
So what's the deal? Does serial communication only work between a single client and a single server? Is there a way to get multiple clients writing to the server? I appreciate your suggestions.
Multiple clients (e.g. Arduinos) communicating with one server (e.g. a desktop computer) is commonly done with the serial variant:
RS-485
This is a simple method widely used in industrial settings where you want to have many devices connected to one computer via one serial port. This type of arrangement is also called multi-drop, because one cable strings around a building with tees that tap in and drop lines to each device.
The hardware for this is widely available. You can buy USB serial adapters that provide the hardware interface for a computer; programmatically, the port looks just like an RS-232 port. For the Arduino, you would just add a transceiver chip. A sea of serial transceivers exists, e.g.:
Example computer USB adapter with 485 interface
Sample RS485 transceiver chip from Element14
All the devices hang on the same bus, listening at the same time. A simple communication protocol is to just add a device address before every command. For example:
001SETLIGHT1 <- tells Arduino "001" to turn on the light
013SETLIGHT0 <- tells "013" to turn off the light
Any device hanging on the cable ignores commands that do not start with its address. When a device responds, it prepends its own address:
001SETLIGHT1DONE <- response from device "001" that the command has been received and executed
The address in the response lets the receiving party know which device was talking.
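On the computer side, driving that protocol through the USB adapter looks like ordinary serial code. A hedged sketch with pySerial (the device path, baud rate, and line framing are assumptions):

    import serial  # pySerial

    def send_command(bus, address, command):
        # Prepend the three-digit device address, as described above
        bus.write(f"{address:03d}{command}\n".encode())
        reply = bus.readline().decode().strip()
        # The responding device prepends its own address
        if reply.startswith(f"{address:03d}"):
            return reply[3:]
        return None  # no reply, or a reply from another device

    bus = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)
    print(send_command(bus, 1, "SETLIGHT1"))  # e.g. "SETLIGHT1DONE"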
Well, your question can be quite broad, so I'm going to layer my answer:
On the hardware side, the same pair of wires can be shared by many devices. It is mostly a question of electronics (keeping the signal in the right voltage range) and of not having all the devices write to the line at the same time (or you'll get wreckage).
On the software side, on the host, yes, you can share the same serial connection to a device among multiple processes, but that's not straightforward. I'll assume you're using a Unix (macOS or Linux):
In Unix, everything is a file, and your serial connection is one too: /dev/ttyACM0 on Linux, for example.
When a process opens that file, it can lock it (using an ioctl, IIRC) so that no other process can mess with the file.
Then that process, and only that process, can read input from and write output to the file.
Fortunately, it is still possible to share the connection between processes. One way would simply be to use the tee command, which can take input from one process, give it back the output, and copy that output to another process. You can also do it from within Python by duplicating the file descriptor (see the sketch after the socat example below).
To easily output data that can be redirected the Unix way (using pipes), you can use socat: http://www.dest-unreach.org/socat/
Here's a usage example:
socat -,raw,echo=0,escape=0x0f /dev/ttyACM0,raw,echo=0,crnl
You may want to tweak it for your needs.
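And for the file-descriptor duplication mentioned above, a minimal Python sketch (device path assumed):

    import os
    import serial  # pySerial

    port = serial.Serial("/dev/ttyACM0", 9600, timeout=1)
    dup_fd = os.dup(port.fileno())  # duplicate the descriptor
    reader = os.fdopen(dup_fd, "rb", buffering=0)

    # port and reader now share the same underlying connection; any byte
    # read through one is consumed from the common stream, so the two
    # users must coordinate who reads when.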
Edit:
I forgot about RS-485, which jdr5ca was smart enough to recommend. My explanation below is restricted to RS-232, the more garden-variety serial port. As jdr5ca points out, RS-485 is a much better alternative for the described problem.
Original:
To expand on zmo's answer a bit: it is possible to share serial at the hardware level, and it has been done before, but it is rarely done in practice.
Likewise, at the software driver level it is again theoretically possible to share, but you run into problems similar to those at the hardware level, i.e. how to "share" the link to prevent collisions, etc.
A typical setup would be two serial (hardware) devices attached to each other 1:1. Each would run a single software process that manages sending/receiving data on the link.
If it is desired to share the serial link among multiple processes (on either side), the software process that manages the link would also need to pass the received data to each reading process (keeping track of which data each process has read) and arbitrate which sending process gets access to the link during writes.
If there are multiple read/write processes on each end of the link, the handshaking/coordination of all this gets deep, as some sort of meta-signaling arrangement may be needed to coordinate the comms between the processes on each end.
Either a real mess or a fun challenge, depending on your needs and how you view such things.
Short version of my question:
How do I design a single Python script that can listen and respond to inputs received via HTTP or a serial port, and also initiate communications via these channels on its own? My problem is that I don't understand how to design a single script that both (i) uses a web framework to listen on some port for HTTP inputs, and (ii) also does other work that's independent of incoming HTTP requests.
Long version:
I want to use Python to design a system that does the following:
Listens to a serial port for occasional reports. Specifically, I have a network of JeeNode sensors (wireless Arduino-compatible modules) that talk to a central JeeLink, which connects to my computer via USB and talks to my Python script via pySerial.
Listens to a web URL for occasional inputs. Specifically, users send commands to the system via SMS to a Twilio number. Twilio intercepts the SMS messages and posts them to a URL I designate, and I use the Bottle micro web-framework to listen for new HTTP requests.
Responds to both types (serial and HTTP) of inputs. For example, if a user texts the command "Sleep", I want to (i) tell the sensors to go to sleep via the serial port -> JeeLink (which will then forward the command onto the remotes); and (ii) reply to the sender -- and maybe other users -- that the command has been received and is being executed.
Occasionally initiates its own communications to users (via HTTP -> Twilio -> SMS) or remote sensors (via serial -> JeeLink) without any precipitating input event. Two examples: (1) I want to report out to users or remote sensors every N minutes even if I haven't received any new inputs. (2) I want to tell users when remotes have actually entered Sleep mode. Because the remotes are battery-powered, they spend most of their time in an inaccessible low-power mode; they can only receive new commands from the JeeLink when they initiate a wireless "check-in" every 5 min. So while technically the remotes go to sleep (or wake up, etc.) in response to a user command, commands and responses are effectively independent.
My problem is that all of the usage examples of web frameworks I've seen seem to assume that all precipitating events arrive via HTTP requests. I can create a Bottle object and use decorators to bind code to it that gets executed whenever it sees an HTTP request matching some specified URL path. But I don't know how to do that while simultaneously doing other work that's independent of HTTP events, for example listening to the serial port.
After struggling a lot, the potential solutions I'm considering now are:
Splitting the functionality into separate scripts: A.py listens for text messages via HTTP and writes the relevant information to some database; B.py continuously reads the database for new records and reacts accordingly, as well as listening to the serial monitor and doing other work. This seems like it would work fine, but it feels inelegant, and I suspect there's a simpler solution I'm unaware of.
Maybe the answer is related to Python decorators? I use various decorators to specify the URL paths that, when a matching HTTP request comes in, execute the code bound to them. So I'm guessing there's maybe some other kind of decorator that, rather than listening for HTTP requests, gets executed when my "main" Python code tells it to? But I don't know enough about decorators to know if this is true.
It seems like you are trying to write an asynchronous application to manage your network of nodes via HTTP. You want to respond to incoming communications on multiple channels as they occur, you want to initiate communications on a schedule, on multiple channels, and you want those two forms of communication to interact. All of these communications are with an outside world that is slow, so it behooves you not to block if you don't need to.
It will probably be easiest to maintain your system if you organize your code into several Python modules, split by their area of concern - serial interface code, HTTP interface code, common processing code-paths, etc. Weave those components together in a central control module, which imports your libraries, and knows how to start and stop cleanly. Then you can test the serial interface independent of the web interface, and potentially reuse some of those Python modules in other projects.
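One concrete shape for that control module, as a sketch: run the Bottle app in a background thread and keep the serial polling and the schedule in the main loop, handing work between them through a queue. The serial device, port number, and webhook path are all assumptions:

    import queue
    import threading
    import time

    import bottle
    import serial  # pySerial

    commands = queue.Queue()  # HTTP thread -> main loop
    app = bottle.Bottle()

    @app.post("/sms")  # assumed path that Twilio is configured to POST to
    def incoming_sms():
        commands.put(bottle.request.forms.get("Body", ""))
        return "OK"

    # Bottle's built-in server blocks, so it gets its own daemon thread
    threading.Thread(
        target=lambda: app.run(host="0.0.0.0", port=8080, quiet=True),
        daemon=True,
    ).start()

    jeelink = serial.Serial("/dev/ttyUSB0", 57600, timeout=1)
    last_report = time.time()

    while True:
        line = jeelink.readline()        # sensor report, if one arrived
        if line:
            print("sensor:", line)
        try:
            cmd = commands.get_nowait()  # user command received via HTTP
            jeelink.write(cmd.encode() + b"\n")
        except queue.Empty:
            pass
        if time.time() - last_report > 600:  # self-initiated work, every 10 min
            last_report = time.time()
            # e.g. call Twilio's REST API here to send a status SMS

The timeout=1 on the serial port doubles as the loop's tick, so neither channel can starve the other.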