I am trying to write a program in PyDAQmx that counts digital edges and outputs a TTL signal every nth edge. I am having trouble setting the acquisition mode in PyDAQmx to "1 Sample (On Demand)", which is what I set while using LabVIEW. I am using an NI USB-6210 DAQ device.
This is my first time coding with NI-DAQ/PyDAQmx/etc., so I based this on an example on the PyDAQmx page that shows how to translate a C program into Python. The relevant piece of code looks like this:
from PyDAQmx import *
import numpy

taskHandle = TaskHandle()
read = uInt32()
data = numpy.zeros((1000,), dtype=numpy.uint32)
try:
    DAQmxCreateTask("", byref(taskHandle))
    DAQmxCreateCICountEdgesChan(taskHandle, "Dev6/ctr0", "",
                                DAQmx_Val_Rising, 0, DAQmx_Val_CountUp)
    # Somehow set acquisition mode here
    DAQmxStartTask(taskHandle)
    while True:
        # timeout of 1000 s; the current count is written into read
        DAQmxReadCounterScalarU32(taskHandle, 1000.0, byref(read), None)
        print "Acquired count: %d" % read.value
except DAQError as err:
    print "DAQmx Error: %s" % err
On-demand sampling is the default timing mode for counter input tasks, and you can confirm that by asking the driver for the task's Sample Timing Type attribute via the DAQmx C API:
taskHandle = TaskHandle()
DAQmxCreateTask("", byref(taskHandle))
DAQmxCreateCICountEdgesChan(taskHandle, "Dev6/ctr0", "",
                            DAQmx_Val_Rising, 0, DAQmx_Val_CountUp)
timingType = int32()
DAQmxGetSampTimingType(taskHandle, byref(timingType))
print(timingType.value)
If timingType has the value 10390 (DAQmx_Val_OnDemand), then you have on-demand sampling.
In general, if there isn't a function that configures the behavior you want (in this case, there is no DAQmxCfgOnDemandTiming() function), you can assume it is the default configuration. The convenience functions don't expose all of the device's settings, however, so for very specialized behavior you must get and set the individual properties explicitly.
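For instance, if you ever need to force the mode explicitly (say, after another timing call has changed it), a minimal sketch using the attribute setter that corresponds to the getter above would be:

# Explicitly request on-demand (1 Sample) timing for the task
DAQmxSetSampTimingType(taskHandle, DAQmx_Val_OnDemand)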
Using native Python code in SQL UDFs in MonetDB is really powerful. But debugging such UDFs could benefit from more support. In particular, if I use the old-fashioned print('debugging info'), it disappears into a big black void.
create function dummy()
returns string
language python{
    print('Entering the dummy UDF')
    return 'hello';
};
How can I retrieve this information from the server or the MonetDB client?
I was debugging some Python UDFs last week :)
Step 1: first make sure your Python code at least works in a plain Python interpreter.
Step 2: in the Python UDF, write your debugging info to a file, e.g.:
f = open('/tmp/debug.out', 'w')
f.write('my debugging info\n')
f.close()
This isn't ideal, but it works. I also used this to export the parameter values of my Python UDF; that way, I can run the body of the UDF in a Python interpreter with the exact data I receive from MonetDB.
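For instance, since MonetDB/Python hands the UDF its input columns as NumPy arrays, a sketch like the following (the file path and the parameter name column are just examples) lets you replay the UDF body outside the server:

# inside the UDF body: dump the input column so it can be replayed later
import numpy
numpy.save('/tmp/udf_column.npy', column)  # 'column' is the UDF's input parameter

# later, in a plain Python interpreter:
import numpy
column = numpy.load('/tmp/udf_column.npy')
# ...now step through the UDF body with the real data...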
In case someone is still interested in this problem: there are two newer ways of debugging MonetDB's Python UDFs.
1) Using the Python client pymonetdb (https://github.com/gijzelaerr/pymonetdb).
You can install it through pip:
pip install pymonetdb
To use it, consider the following setup: a table that holds integers, and a UDF that computes the mean absolute deviation of a given column.
CREATE TABLE integers(i INTEGER);
INSERT INTO integers VALUES (1), (3), (6), (8), (10);
CREATE OR REPLACE FUNCTION mean_deviation(column INTEGER)
RETURNS DOUBLE LANGUAGE PYTHON {
    mean = 0.0
    for i in range(len(column)):
        mean += column[i]
    mean = mean / len(column)
    distance = 0.0
    for i in range(len(column)):
        distance += abs(column[i] - mean)
    deviation = distance / len(column)
    return deviation
};
To debug the function from the terminal (i.e., with pdb), open a database connection using pymonetdb.connect(), get a cursor object from the connection, and call the cursor's debug() function, passing the SQL query you want to examine and the name of the UDF you wish to debug:
import pymonetdb
conn = pymonetdb.connect(database='demo') #Open Database connection
c = conn.cursor()
sql = 'select mean_deviation(i) from integers;'
c.debug(sql, 'mean_deviation') #Console Debugging
There is an optional sampling step that transfers only a uniform random sample of the data instead of the full input data set. If you wish to sample, just pass the number of elements you want (e.g., c.debug(sql, 'mean_deviation', 10) to get a subset of 10 elements).
2) Using a proof-of-concept plugin for PyCharm called devudf, which you can install through PyCharm's plugin page or directly from the JetBrains page: https://plugins.jetbrains.com/plugin/12063-devudf. It adds an option called "UDF Development" to the main menu and lets you import and export UDFs between your database and PyCharm, so you can enjoy the IDE's debugging capabilities.
I use a Raspberry Pi to collect sensor data and set digital outputs. To make it easy for other applications to set and get values, I'm using a socket server, but I am having trouble finding an elegant way of making all the data available through it without writing a function for each data type.
Some examples of values and methods I have that I would like to make available on the socket server:
do[2].set_low() # set digital output 2 low
do[2].value=0 # set digital output 2 low
do[2].toggle() # toggle digital output 2
di[0].value # read value for digital input 0
ai[0].value # read value for analog input 0
ai[0].average # get the average calculated value for analog input 0
ao[4].value=255 # set analog output 4 to byte value 255
ao[4].percent=100 # set analog output 4 to 100%
I've tried eval() and exec():
self.request.sendall(str.encode(str(eval('item.' + recv_string)) + '\n'))
eval() works unless I use an equals sign (=), but I'm not too happy with that solution because of the dangers involved. exec() does the job but does not return any value, and it is just as dangerous.
I've also tried getattr():
recv_string = bytes.decode(self.data).lower().split(';')
values = getattr(item, recv_string[0])
self.request.sendall(str.encode(str(values[int(recv_string[1])].value) + '\n'))  # <-- the hard-coded .value
This works for getting my attributes, and the example above works for reading the value of the attribute I fetch with getattr(). But I cannot figure out how to use getattr() on the .value attribute (marked above) as well.
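For example, a chained getattr()/setattr() sketch of the kind of thing I'm after (with a hypothetical command layout where recv_string[2] names the attribute and recv_string[3] carries an optional value to assign) would look like:

obj = values[int(recv_string[1])]          # e.g. do[1]
attr_name = recv_string[2]                 # e.g. 'value' or 'toggle'
if len(recv_string) > 3:                   # an assignment, e.g. 'do;1;value;0'
    setattr(obj, attr_name, int(recv_string[3]))
    result = 'OK'
else:
    attr = getattr(obj, attr_name)
    result = attr() if callable(attr) else attr   # call methods, read plain attributes
self.request.sendall(str.encode(str(result) + '\n'))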
The semicolon (;) is used to split the incoming command; I've experimented with multiple ways of formatting the commands:
# unit means that I want to talk to a I/O interface module,
# and the name specified which one
unit;unit_name;get;do;1
unit;unit_name;get;do[1]
unit;unit_name;do[1].value
I am free to choose the format since I am also writing the software that uses these commands. I have not yet found a good format which covers all my needs.
Any suggestions for an elegant way of accessing and returning the data above? Preferably without having to add new methods to the socket server every time a new value type or method is added to my I/O ports.
Edit: This is not public, it's only available on my LAN.
Suggestions
Make your API all methods so that eval can always be used:
def value_m(self, newValue=None):
    if newValue is not None:
        self.value = newValue
    return self.value
Then you can always do
result = str(eval(message))
self.request.sendall(str.encode(result + '\n'))
For your messages, I would suggest formatting them to use the exact syntax of the command, so that they can be eval'ed as-is, e.g.
message = 'do[1].value_m()' # read a value, alternatively...
message = 'do[1].value_m(None)'
or to write
message = 'do[1].value_m(0)' # write a value
This makes it easy to keep your messages up to date with your API: because they must match exactly, you won't have a second DSL to deal with. You really don't want to maintain a second API on top of your I/O one.
This is a very simple scheme, suitable for a home project. I would suggest some error handling in evaluation, like so:
import traceback

try:
    result = str(eval(message))
except Exception:
    result = traceback.format_exc()
self.request.sendall(str.encode(result + '\n'))
This way your caller will receive a printout of the exception traceback in the returned message. This will make it much, much easier to debug bad calls.
NOTE If this is public-facing, you cannot do this. All input must be sanitised. You will have to parse each instruction and compare it to the list of available (and desirable) commands, and verify input validity and validity ranges for everything. For such a scenario you are better off simply using one of the input validation systems used for web services, where this problem receives a great deal of attention.
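As a rough illustration of that parsing approach (the dispatch helper and whitelist here are hypothetical, building on the all-methods API suggested above), a whitelist-based dispatcher might look like:

# Only explicitly whitelisted method names are accepted.
ALLOWED = {'value_m', 'toggle', 'set_low'}

def dispatch(item, method_name, arg=None):
    if method_name not in ALLOWED:
        raise ValueError('command not allowed: %r' % method_name)
    method = getattr(item, method_name)
    return method() if arg is None else method(arg)

Each incoming command is then parsed into (item, method_name, arg) and range-checked before being dispatched.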
I wrote some Python code to control a number of USB (electrical relays and temperature sensors) and RS-232 (vacuum gauges) devices. From within this main script (e.g., myscript.py), I would like to import a module (e.g., exp_protocols.py) where I define different experimental protocols, i.e. a series of instructions to open or close relays and read temperature and pressure values, with some simple flow control thrown in (e.g. "wait until temperature exceeds 200 degrees C").
My initial attempt looked like this:
switch_A = Relay('A')
switch_B = Relay('B')
gauge_1 = Gauge('1')
global switch_A
global switch_B
global gauge_1
from exp_protocols import my_protocol
my_protocol()
with exp_protocols.py looking like this:
def my_protocol():
    print 'Pressure is %.3f mbar.' % gauge_1.value
    switch_A.close()
    switch_B.open()
This fails with a NameError, because exp_protocols.my_protocol cannot access the objects defined in myscript.py.
It seems, from reading the answers to earlier questions here, that I could (should?) create all my Relay and Gauge variables in another module, e.g., myconfig.py, and then import myconfig both in myscript.py and exp_protocols.py. But if I do that, won't my Relay and Gauge objects be created twice (thus trying to open serial ports that are already active, etc.)?
What would be the best (most Pythonic) way to achieve this kind of inter-module communication?
Thanks in advance.
No matter how many times you import myconfig, Python only imports the module once. After the first import, subsequent import statements just grab another reference to the already-loaded module.
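So the shared-module approach from your question would work; a minimal sketch (module and class names taken from the question, with the Relay/Gauge import left as a placeholder):

# myconfig.py -- the module body runs only once, however often it is imported
from mydevices import Relay, Gauge  # hypothetical; import from wherever they live

switch_A = Relay('A')
switch_B = Relay('B')
gauge_1 = Gauge('1')

# exp_protocols.py
import myconfig

def my_protocol():
    print 'Pressure is %.3f mbar.' % myconfig.gauge_1.value
    myconfig.switch_A.close()
    myconfig.switch_B.open()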
Globals should only be used if these are static bits of data. Your function would be more generic if it took the variables as parameters:
def my_protocol(switch_A, switch_B, gauge_1):
    print 'Pressure is %.3f mbar.' % gauge_1.value
    switch_A.close()
    switch_B.open()
Other modules could then use it with many combinations of data. Suppose you have blocks of switches in a list (I'm just making this up, since I have no idea how you configure your data), you could process them all with the same function:
import exp_protocols
switch_blocks = [
    [Relay('1-A'), Relay('1-B'), Gauge('1-1')],
    [Relay('2-A'), Relay('2-B'), Gauge('2-1')],
]

for switch1, switch2, gauge in switch_blocks:
    exp_protocols.my_protocol(switch1, switch2, gauge)
For this example I developed a simple Modelica model based on the Fluid library of the MSL. I connected a MassFlowSource to a pipe and a Boundary_PT as the sink, as in the picture below:
http://www.casimages.com/img.php?i=14061806120359130.png
I generated an FMU with OpenModelica (in model-exchange mode).
I drive this FMU from Python with the code below:
import pyfmi, os
from pyfmi import load_fmu
myModel = load_fmu('PathToFolder\\test3.fmu')
res1 = myModel.simulate() # First simulation with m_flow in source set to 1 kg/s
x = myModel.get('boundary1.m_flow') # Mass flow rate of the source
y = myModel.get('pipe.port_a.m_flow') # Mass flow rate in pipe
print x, y
myModel.set('boundary1.m_flow', 2)
option = myModel.simulate_options()
option['initialize'] = False # Skip re-initialization on the second run
res2 = myModel.simulate(options = option) # Second simulation with m_flow in source set to 2 kg/s
x = myModel.get('boundary1.m_flow') # Mass flow rate of the source
y = myModel.get('pipe.port_a.m_flow') # Mass flow rate in pipe
print x, y
os.system('pause')
The objective is to show a problem that occurs when you change a parameter in the model, here the "m_flow" variable of the source component. Setting it to 2 should change "m_flow" in the pipe, but it does not.
Results: in the first simulation both "m_flow" values are 1, which is expected because that is how the model is set up. In the second simulation I set the source parameter to 2, but the pipe's "m_flow" stays at 1 (it should be 2).
http://www.casimages.com/img.php?i=140618060905759619.png
The Modelica model of the fluid source is this one (only the relevant part):
equation
  if not use_m_flow_in then
    m_flow_in_internal = m_flow;
  end if;
  connect(m_flow_in, m_flow_in_internal);
I think the FMU does not take parameters into account when they appear inside an if-condition. This is a problem for me, because I need to manage FMUs and be sure that when I set a parameter, the simulation actually uses the new value. How can I be sure that FMU/FMI works correctly? Is there an exhaustive list of the kinds of parameters that cannot be changed in an FMU?
I already know that parameters which change the number of equations cannot be changed in an FMU (the same goes for variables which change the index of the DAEs).
Note that OpenModelica has a concept of structural parameters and the Evaluate=true annotation. For example, if a parameter is used as an array dimension, it might be evaluated to an Integer value. All uses of that parameter will use the evaluated value, as if it was a constant.
Rather than including a picture of the diagram, the Modelica source code would have been easier to look at in order to find out what OpenModelica did to the system.
I suspect a parameter was evaluated. If you generate non-FMU code, you could inspect the modelName_init.xml generated by OpenModelica and find the entry for a parameter and look for the property isValueChangeable.
You could also use OMEdit to debug the system and view the initial equation (generate the executable including debug information). File->Open Transformations File, then select the modelName_info.xml file. Search for the variable you tried to change and go to the initial equation that defined it. It could very well be that a start-value (set by PyFMI) is ignored because it is not needed to produce a solution.
Whenever you try to set new values for a parameter, follow these steps:
1. Reset the model.
2. Set the new values for the parameter.
3. Simulate the model.
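In PyFMI that sequence would look roughly like this (a sketch based on the question's model; reset() returns the FMU to its state before initialization):

myModel.reset()                        # 1. reset the model
myModel.set('boundary1.m_flow', 2)     # 2. set the new parameter value
res2 = myModel.simulate()              # 3. simulate again, including initialization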
I am not familiar with PyFMI, but I encountered much the same situation before. You could try a few things below.
Try to terminate/free the instance after your first simulation.
Since most parameters cannot be changed after initialization, you could turn that parameter into an input connector, so that this specific parameter can be changed at any time.
(In an FMU from Dymola) I also found that if that parameter is involved in your initial nonlinear system of equations, you will get an error ("the model could not be initialized") if you try to initialize the model on the same instance.
I am currently writing a test for validating some error-correcting code:
inputData1 = "1001011011"
inputData2 = "1001111011"
fingerPrint1 = parityCheck.getParityFingerprint(inputData1)
# Expected: fingerPrint1=0
fingerPrint2 = parityCheck.getParityFingerprint(inputData2)
# Expected: fingerPrint2=1
if fingerPrint1 == fingerPrint2:
    print "Test failed: errorCorrectingAlgo1 failed to detect error"
else:
    print "Test success: errorCorrectingAlgo1 successfully detected error"
Is there a Python class I can use to automatically generate errors (burst error, single event, reordering, etc.) on a binary string? E.g.:
inputData1 = "1001011011"
inputData1BurstError = applyBurstError(inputData1)   # e.g. "1011111011", "1001000000", ...
inputData1RandomError = applyRandomError(inputData1) # e.g. "0001101011", "0111101101", ...
inputData1Reordering = applyReordering(inputData1)   # e.g. "0101110011", "1101101001", ...
inputData1SingleEvent = applySingleEvent(inputData1) # e.g. "1001011011", "1000011011", ...
I know that such a class would be easy to implement for simple parity-check validation. However, I need a more complete class to test more complex error-detecting codes such as CRC. I have used Netem (http://www.linuxfoundation.org/collaborate/workgroups/networking/netem) in the past to modify packets entering and leaving interfaces in a telecom lab. However, I doubt Netem would be a good solution this time, as my whole test is meant to run on my desktop computer only, and I am working on Windows 7. Moreover, Netem does not provide a complete enough set of functions for my test implementation.
Any help/suggestion would be greatly appreciated.
Thanks!
Related question: How to shuffle a list with Gaussian distribution