I am attempting to implement the slowly updating global window side inputs example from the documentation, porting it from Java to Python, and I am stuck on what the AfterProcessingTime.pastFirstElementInPane() equivalent is in Python. For the map I've done something like this:
class ApiKeys(beam.DoFn):
    def process(self, elm) -> Iterable[Dict[str, str]]:
        yield TimestampedValue(
            {"<api_key_1>": "<account_id_1>", "<api_key_2>": "<account_id_2>"},
            elm,
        )
map = beam.pvalue.AsSingleton(
    p
    | "trigger pipeline" >> beam.Create([None])
    | "define schedule"
    >> beam.Map(
        lambda _: (
            0,   # would be timestamp.Timestamp.now() in production
            20,  # would be timestamp.MAX_TIMESTAMP in production
            1,   # would be around 1 hour or so in production
        )
    )
    | "GenSequence" >> PeriodicSequence()
    | "ApplyWindowing"
    >> beam.WindowInto(
        beam.window.GlobalWindows(),
        trigger=Repeatedly(Always(), AfterProcessingTime(???)),
        accumulation_mode=AccumulationMode.DISCARDING,
    )
    | "api_keys" >> beam.ParDo(ApiKeys())
)
I am hoping to use this as a Dict[str, str] side input to a downstream transform that will have windows of 60 seconds, merging it with this one, which I hope to update on an hourly basis.
The point is to run this on Google Cloud Dataflow (where we currently just re-release the pipeline to update the api_keys).
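For context, the consuming side would look roughly like this (simplified sketch; FormatWithKeys and the Pub/Sub subscription name are placeholders, not actual code from our pipeline):

downstream = (
    p
    | "read" >> beam.io.ReadFromPubSub(subscription="<subscription>")
    | "window" >> beam.WindowInto(beam.window.FixedWindows(60))
    | "pair with account ids" >> beam.ParDo(FormatWithKeys(), api_keys=map)
)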
I've pasted the Java example from the documentation below for convenience's sake:
public static void sideInputPatterns() {
// This pipeline uses View.asSingleton for a placeholder external service.
// Run in debug mode to see the output.
Pipeline p = Pipeline.create();
// Create a side input that updates each second.
PCollectionView<Map<String, String>> map =
p.apply(GenerateSequence.from(0).withRate(1, Duration.standardSeconds(5L)))
.apply(
Window.<Long>into(new GlobalWindows())
.triggering(Repeatedly.forever(AfterProcessingTime.pastFirstElementInPane()))
.discardingFiredPanes())
.apply(
ParDo.of(
new DoFn<Long, Map<String, String>>() {
@ProcessElement
public void process(
@Element Long input, OutputReceiver<Map<String, String>> o) {
// Replace map with test data from the placeholder external service.
// Add external reads here.
o.output(PlaceholderExternalService.readTestData());
}
}))
.apply(View.asSingleton());
// Consume side input. GenerateSequence generates test data.
// Use a real source (like PubSubIO or KafkaIO) in production.
p.apply(GenerateSequence.from(0).withRate(1, Duration.standardSeconds(1L)))
.apply(Window.into(FixedWindows.of(Duration.standardSeconds(1))))
.apply(Sum.longsGlobally().withoutDefaults())
.apply(
ParDo.of(
new DoFn<Long, KV<Long, Long>>() {
@ProcessElement
public void process(ProcessContext c) {
Map<String, String> keyMap = c.sideInput(map);
c.outputWithTimestamp(KV.of(1L, c.element()), Instant.now());
LOG.debug(
"Value is {}, key A is {}, and key B is {}.",
c.element(),
keyMap.get("Key_A"),
keyMap.get("Key_B"));
}
})
.withSideInputs(map));
}
/** Placeholder class that represents an external service generating test data. */
public static class PlaceholderExternalService {
public static Map<String, String> readTestData() {
Map<String, String> map = new HashMap<>();
Instant now = Instant.now();
DateTimeFormatter dtf = DateTimeFormat.forPattern("HH:MM:SS");
map.put("Key_A", now.minus(Duration.standardSeconds(30)).toString(dtf));
map.put("Key_B", now.minus(Duration.standardSeconds(30)).toString());
return map;
}
}
Any ideas as to how to emulate this example would be enormously appreciated; I've literally spent days on this issue now :(
Update #2, based on @AlexanderMoraes' answer:
So, I've tried changing it according to my understanding of your suggestions:
main_window_size = 5
trigger_interval = 30

side_input = beam.pvalue.AsSingleton(
    p
    | "trigger pipeline" >> beam.Create([None])
    | "define schedule"
    >> beam.Map(
        lambda _: (
            0,                 # timestamp.Timestamp.now().__float__(),
            60,                # timestamp.Timestamp.now().__float__() + 30.0,
            trigger_interval,  # fire_interval
        )
    )
    | "GenSequence" >> PeriodicSequence()
    | "api_keys" >> beam.ParDo(ApiKeys())
    | "window"
    >> beam.WindowInto(
        beam.window.GlobalWindows(),
        trigger=Repeatedly(AfterProcessingTime(main_window_size)),
        accumulation_mode=AccumulationMode.DISCARDING,
    )
)
But when combining this with another pipeline whose windowing is set to something smaller than trigger_interval, I am unable to use the dictionary as a singleton, because for some reason it is duplicated:
ValueError: PCollection of size 2 with more than one element accessed as a singleton view. First two elements encountered are "{'<api_key_1>': '<account_id_1>', '<api_key_2>': '<account_id_2>'}", "{'<api_key_1>': '<account_id_1>', '<api_key_2>': '<account_id_2>'}". [while running 'Pair with AccountIDs']
Is there some way to clarify that the singleton output should ignore whatever came before it?
The title of the question, "slowly updating side inputs", refers to a documentation pattern that already has a Python version of the code. However, the code you provided is from "updating global window side inputs", which only has a Java version. So I will be addressing the second one.
You are not able to reproduce AfterProcessingTime.pastFirstElementInPane() directly in Python. This function is used to fire triggers, which determine when to emit the results of each window (referred to as a pane). In your case, this particular call creates a trigger that fires when the current processing time passes the processing time at which the trigger saw the first element in a pane (this is described in the AfterProcessingTime Javadoc). In Python this is achieved using AfterWatermark and AfterProcessingTime().
Below are two pieces of code, one in Java and one in Python, so you can compare their usage. Both examples set a time-based trigger which emits results one minute after the first element of the window has been processed. Also, the accumulation mode is set to not accumulate results (Java: discardingFiredPanes(), Python: accumulation_mode=AccumulationMode.DISCARDING).
1- Java:
PCollection<String> pc = ...;
pc.apply(Window.<String>into(FixedWindows.of(1, TimeUnit.MINUTES))
.triggering(AfterProcessingTime.pastFirstElementInPane()
.plusDelayOf(Duration.standardMinutes(1)))
.discardingFiredPanes());
2- Python: the trigger configuration is the same as described in point 1
pcollection | WindowInto(
FixedWindows(1 * 60),
trigger=AfterProcessingTime(1 * 60),
accumulation_mode=AccumulationMode.DISCARDING)
The examples above were taken from the documentation.
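As a side note, recent Python SDKs ship a PeriodicImpulse transform that bundles the periodic firing used in this pattern. A minimal sketch of the side-input portion, assuming Beam 2.20+ and a placeholder fetch function (fetch_api_keys is made up):

import apache_beam as beam
from apache_beam.transforms import trigger, window
from apache_beam.transforms.periodicsequence import PeriodicImpulse
from apache_beam.utils.timestamp import MAX_TIMESTAMP, Timestamp

def fetch_api_keys(_):
    # Placeholder for the external read; returns the full dict on every firing.
    return {"<api_key_1>": "<account_id_1>", "<api_key_2>": "<account_id_2>"}

side_input = beam.pvalue.AsSingleton(
    p
    | "hourly impulse" >> PeriodicImpulse(
        start_timestamp=Timestamp.now(),
        stop_timestamp=MAX_TIMESTAMP,
        fire_interval=3600,  # roughly hourly
    )
    | "refresh keys" >> beam.Map(fetch_api_keys)
    | "window per firing" >> beam.WindowInto(
        window.GlobalWindows(),
        trigger=trigger.Repeatedly(trigger.AfterProcessingTime(1)),
        accumulation_mode=trigger.AccumulationMode.DISCARDING,
    )
)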
Using AsIter() instead of AsSingleton() worked for me.
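For anyone trying that route, a rough sketch of the swap (the names here are made up; the DoFn keeps the last emitted dict, since the iterable can contain more than one firing):

keys_iter = beam.pvalue.AsIter(api_keys_pcoll)  # instead of AsSingleton

class PairWithAccountIds(beam.DoFn):
    def process(self, elm, api_keys):
        # api_keys is an iterable of dicts, one per visible trigger firing;
        # keep only the most recent one.
        latest = None
        for d in api_keys:
            latest = d
        if latest is not None:
            yield (latest.get(elm), elm)

paired = main_stream | beam.ParDo(PairWithAccountIds(), api_keys=keys_iter)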
I'm trying to count Kafka message keys using the direct runner.
If I put max_num_records=20 in ReadFromKafka, I can see results printed or output to a text file,
like:
('2102', 5)
('2706', 5)
('2103', 5)
('2707', 5)
But without max_num_records, or if max_num_records is larger than the message count in the Kafka topic, the program keeps running but nothing is output.
If I try to output with beam.io.WriteToText, an empty temp folder is created, like:
beam-temp-StatOut-d16768eadec511eb8bd897b012f36e97
Terminal shows:
2.30.0: Pulling from apache/beam_java8_sdk
Digest: sha256:720144b98d9cb2bcb21c2c0741d693b2ed54f85181dbd9963ba0aa9653072c19
Status: Image is up to date for apache/beam_java8_sdk:2.30.0
docker.io/apache/beam_java8_sdk:2.30.0
If I put 'enable.auto.commit': 'true' in the Kafka consumer config, the messages are committed and other clients from the same group can't read them, so I assume it's reading successfully, just not processing or outputting.
I tried fixed-time and sliding-time windowing, with and without different triggers; nothing changes.
I tried the Flink runner and got the same result as with the direct runner.
No idea what I did wrong. Any help?
environment:
centos 7
anaconda
python 3.8.8
java 1.8.0_292
beam 2.30
code as below:
import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka
from apache_beam.options.pipeline_options import (
    PipelineOptions, SetupOptions, StandardOptions)
from apache_beam.transforms import window
from apache_beam.transforms.trigger import AccumulationMode, AfterCount

direct_options = PipelineOptions([
    "--runner=DirectRunner",
    "--environment_type=LOOPBACK",
    "--streaming",
])
direct_options.view_as(SetupOptions).save_main_session = True
direct_options.view_as(StandardOptions).streaming = True

conf = {'bootstrap.servers': '192.168.75.158:9092',
        'group.id': "g17",
        'enable.auto.commit': 'false',
        'auto.offset.reset': 'earliest'}

if __name__ == '__main__':
    with beam.Pipeline(options=direct_options) as p:
        msg_kv_bytes = (
            p
            | 'ReadKafka' >> ReadFromKafka(consumer_config=conf, topics=['LaneIn']))
        messages = (
            msg_kv_bytes
            | 'Decode' >> beam.MapTuple(
                lambda k, v: (k.decode('utf-8'), v.decode('utf-8'))))
        counts = (
            messages
            | beam.WindowInto(
                window.FixedWindows(10),
                trigger=AfterCount(1),  # AfterCount(4), AfterProcessingTime
                # allowed_lateness=3,
                accumulation_mode=AccumulationMode.ACCUMULATING)  # or DISCARDING
            # | 'Windowing' >> beam.WindowInto(window.FixedWindows(10, 5))
            | 'TakeKeyPairWithOne' >> beam.MapTuple(lambda k, v: (k, 1))
            | 'Grouping' >> beam.GroupByKey()
            | 'Sum' >> beam.MapTuple(lambda k, v: (k, sum(v))))
        output = (
            counts
            | 'Print' >> beam.ParDo(print)
            # | 'WriteText' >> beam.io.WriteToText('/home/StatOut', file_name_suffix='.txt')
        )
There are a couple of known issues that you might be running into.
Beam's portable DirectRunner currently does not fully support streaming. The relevant Jira to follow is https://issues.apache.org/jira/browse/BEAM-7514.
Beam's portable runners (including the DirectRunner) have a known issue where streaming sources do not properly emit messages, hence the max_num_records or max_read_time arguments have to be provided to convert such sources to bounded sources. The relevant Jira to follow is https://issues.apache.org/jira/browse/BEAM-11998.
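Until those are resolved, the workaround is the bounded read you already discovered: cap the source so the runner can finish the read. A sketch (the limit values are arbitrary placeholders):

msg_kv_bytes = (
    p
    | 'ReadKafka' >> ReadFromKafka(
        consumer_config=conf,
        topics=['LaneIn'],
        # Converts the unbounded source into a bounded one so the portable
        # runners actually emit elements; not a production setting.
        max_num_records=1000,  # alternatively: max_read_time=60
    ))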
Hi everyone. I am trying to use the BAC0 package in Python 3 to get the values of multiple points on a BACnet network.
I use something like the following:
bacnet = BAC0.lite(ip='x.x.x.x')
tmp_points = bacnet.readRange("11:2 analogInput 0 presentValue")
but it does not seem to work :(
The error is:
BAC0.core.io.IOExceptions.NoResponseFromController: APDU Abort Reason : unrecognizedService
And in the documentation all I can find is:
def readRange(
self,
args,
range_params=None,
arr_index=None,
vendor_id=0,
bacoid=None,
timeout=10,
):
"""
Build a ReadProperty request, wait for the answer and return the value
:param args: String with <addr> <type> <inst> <prop> [ <indx> ]
:returns: data read from device (str representing data like 10 or True)
*Example*::
import BAC0
myIPAddr = '192.168.1.10/24'
bacnet = BAC0.connect(ip = myIPAddr)
bacnet.read('2:5 analogInput 1 presentValue')
Requests the controller at (Network 2, address 5) for the presentValue of
its analog input 1 (AI:1).
"""
To read multiple properties from a device object, you must use readMultiple.
readRange reads from a property that acts like an array (e.g. TrendLog objects implement records as an array; we use readRange to read them in chunks of records).
Details on how to use readMultiple can be found here: https://bac0.readthedocs.io/en/latest/read.html#read-multiple
A simple example would be
bacnet = BAC0.lite()
tmp_points = bacnet.readMultiple("11:2 analogInput 0 presentValue description")
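If you need several points, one readMultiple call per object works; a small sketch (the 11:2 address comes from your question, the object list is hypothetical):

import BAC0

bacnet = BAC0.lite()
# One readMultiple per object; each call returns the requested properties.
for obj in ("analogInput 0", "analogInput 1", "analogInput 2"):
    values = bacnet.readMultiple("11:2 {} presentValue description".format(obj))
    print(obj, values)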
In several VRP problems, when you specify the total number of vehicles, it is taken for granted that all of them will be used and that each visits at least one node. In practice this may not even be the best solution, but I would like to understand why it happens and how to adapt the model to my needs.
The example below is the simple VRP example from OR-Tools, with a small edit to the distance matrix and some changes following this blog post: https://activimetrics.com/blog/ortools/counting_dimension/. According to the latter, it is possible to enforce a fair distribution of routes, which seemed totally appealing, since by default the solver minimizes the longest route, ends up using fewer vehicles, and assigns several nodes to each. What I need is an approach that guarantees every vehicle is used at least once.
With 5 vehicles the solver does what I expect and places one node on each vehicle, which was not possible without this edit. The problem is that with only 4 vehicles the solver no longer manages it: it distributes routes, but always leaves one vehicle out.
using System;
using System.Collections.Generic;
using Google.OrTools.ConstraintSolver;
public class VrpGlobalSpan
{
class DataModel
{
public long[,] DistanceMatrix = {
{0, 9777, 10050,7908,10867,16601},
{9777, 0, 4763, 4855, 19567,31500},
{10050, 4763,0,2622,11733,35989},
{7908,4855,2622,0,10966,27877},
{10867,19567,11733,10966,0,27795},
{16601,31500,35989,27877,27795,0},
};
public int VehicleNumber = 4;
public int Depot = 0;
};
/// <summary>
/// Print the solution.
/// </summary>
static void PrintSolution(
in DataModel data,
in RoutingModel routing,
in RoutingIndexManager manager,
in Assignment solution)
{
// Inspect solution.
long maxRouteDistance = 0;
for (int i = 0; i < data.VehicleNumber; ++i)
{
Console.WriteLine("Route for Vehicle {0}:", i);
long routeDistance = 0;
var index = routing.Start(i);
while (routing.IsEnd(index) == false)
{
Console.Write("{0} -> ", manager.IndexToNode((int)index));
var previousIndex = index;
index = solution.Value(routing.NextVar(index));
routeDistance += routing.GetArcCostForVehicle(previousIndex, index, 0);
}
Console.WriteLine("{0}", manager.IndexToNode((int)index));
Console.WriteLine("Distance of the route: {0}m", routeDistance);
maxRouteDistance = Math.Max(routeDistance, maxRouteDistance);
}
Console.WriteLine("Maximum distance of the routes: {0}m", maxRouteDistance);
}
public static void Main(String[] args)
{
// Instantiate the data problem.
DataModel data = new DataModel();
// Create Routing Index Manager
RoutingIndexManager manager = new RoutingIndexManager(
data.DistanceMatrix.GetLength(0),
data.VehicleNumber,
data.Depot);
// Create Routing Model.
RoutingModel routing = new RoutingModel(manager);
// Create and register a transit callback.
int transitCallbackIndex = routing.RegisterTransitCallback(
(long fromIndex, long toIndex) => {
// Convert from routing variable Index to distance matrix NodeIndex.
var fromNode = manager.IndexToNode(fromIndex);
var toNode = manager.IndexToNode(toIndex);
return data.DistanceMatrix[fromNode, toNode];
}
);
// Define cost of each arc.
routing.SetArcCostEvaluatorOfAllVehicles(transitCallbackIndex);
double answer = 5 / data.VehicleNumber + 1; // note: integer division, 5 / 4 == 1, so answer == 2
//double Math.Ceiling(answer);
//double floor = (int)Math.Ceiling(answer);
routing.AddConstantDimension(
1,
(int)Math.Ceiling(answer),
true, // start cumul to zero
"Distance") ;
RoutingDimension distanceDimension = routing.GetDimensionOrDie("Distance");
//distanceDimension.SetGlobalSpanCostCoefficient(100);
for (int i = 0; i < data.VehicleNumber; ++i)
{
distanceDimension.SetCumulVarSoftLowerBound(routing.End(i), 2, 1000000);
}
// Setting first solution heuristic.
RoutingSearchParameters searchParameters =
operations_research_constraint_solver.DefaultRoutingSearchParameters();
//searchParameters.FirstSolutionStrategy =
// FirstSolutionStrategy.Types.Value.PathCheapestArc;
searchParameters.TimeLimit = new Google.Protobuf.WellKnownTypes.Duration { Seconds = 5 };
searchParameters.LocalSearchMetaheuristic = LocalSearchMetaheuristic.Types.Value.Automatic;
// Solve the problem.
Assignment solution = routing.SolveWithParameters(searchParameters);
// Print solution on console.
PrintSolution(data, routing, manager, solution);
}
}
Perhaps this topic has already been discussed, but I would like to understand what the best path to follow is and what changes would turn this example, and others like it, into a better approach.
I thank you in advance for your attention and await feedback.
Thank you.
I'm wondering if there is an easy way to initialize BPF maps from Python userspace. For my project, I'll have a scary-looking NxN 2D array of floats for each process. For simplicity's sake, let's assume N is constant across processes (say 5). To achieve kernel support for this data, I could do something like:
b = BPF(text = """
    typedef struct
    {
        float transMat[5][5];
    } trans_struct;

    BPF_HASH(trans_mapping, char[16], trans_struct);
    .....
""")
I'm wondering if there's an easy way to initialize this map from Python. Something like:

for ele in someDictionary:
    # assume someDictionary has mapping (comm -> 5x5 float matrix)
    b["trans_mapping"].insert(ele, someDictionary[ele])

I suppose the crux of my confusion is: 1) are all map methods available to the user, and 2) how do we ensure type consistency when going from Python objects to C structures?
Solution, based on pchaigno's comment: the key things to note are the use of ctypes to ensure type consistency across environments, and extracting the table by indexing the BPF program object. Because maps can be fetched by indexing, the get_table() function is now considered out of date. The code below shows the structure of loading data into a map from the Python front end, but doesn't completely conform to the specifics of my question.
from time import sleep, strftime
from bcc import BPF
from bcc.utils import printb
from bcc.syscall import syscall_name, syscalls
from ctypes import *

b = BPF(text = """
BPF_HASH(start, u32, u64);

TRACEPOINT_PROBE(raw_syscalls, sys_exit)
{
    u32 syscall_id = args->id;
    u32 key = 1;
    u64 *val;
    u32 uid = bpf_get_current_uid_gid();
    if (uid == 0)
    {
        val = start.lookup(&key); // find value associated with key 1
        if (val)
            bpf_trace_printk("Hello world, I have value %d!\\n", *val);
    }
    return 0;
}
""")

thisStart = b["start"]
thisStart[c_int(1)] = c_int(9)  # insert key-value pair 1->9

while 1:
    try:
        (task, pid, cpu, flags, ts, msg) = b.trace_fields()
    except KeyboardInterrupt:
        print("Detaching")
        exit()
    print("%-18.9f %-16s %-6d %s" % (ts, task, pid, msg))
I am new to both Perl and Python. As part of my job, I am currently asked to convert a Perl script to Python. The purpose of this script is to automate the Magnum tester and a parametric analyzer. Are any of you able to understand what the get_gpib_status function is trying to do? My specific questions are:
What does if(/Error/) mean in Perl?
What does

chomp;
s/\+//g;
$value = $_;
$foundError = 0;

mean in Perl? And what is the Python equivalent of the get_gpib_status function?
Any kind of help is highly appreciated. Thanks in advance. The script is shown below.
BEGIN {unshift(@INC, "." , ".." , "\\Micron\\Nextest\\perl_modules");}
use runcli;
# Enable input from perl script as nextest cli command. Runcli is the
# command that you'll use to communicate with the tester.
use getHost;
# To latch a module/"library" into the current script; here getHost.pm
# is loaded, used once on nextest system.
#open FILE,">","iv.txt" or die $!;
# Make file ready for reading from FILE
$k=148;
# Time period T = 38ns corresponds to data value of 140
#$i=0;
while($k<156)
{
$i=3;
while($i<4)
{
#$logfile = "vpasscp8stg_SS_vppmode"."stg"."$i"."freq"."$k".".txt";
# Give the name to the logfile
#open(LOG,">$logfile") or die $!;
# Makes the file ready for reading from LOG
#******************* SUBS ****************************
if($i==3)
{
runcli("gpibinit;");
runcli("gpibsend(0x1,\":PAGE:CHAN:MODE SWEEP\")");
# PAGE, CHANnel, MODE, Set the mode to sweep
runcli("gpibsend(0x1,\":PAGE:CHAN:SMU1:VNAME 'Vout'\")");
# Source Monitor Unit, voltage name Vout
runcli("gpibsend(0x1,\":PAGE:CHAN:SMU1:INAME 'Iout'\")");
# current name Iout
runcli("gpibsend(0x1,\":PAGE:CHAN:SMU1:MODE V\")");
# voltage output node
runcli("gpibsend(0x1,\":PAGE:CHAN:SMU1:FUNCTION VAR1\")");
# function Variable
runcli("gpibsend(0x1,\":PAGE:CHAN:SMU2:VNAME 'Vcc'\")");
runcli("gpibsend(0x1,\":PAGE:CHAN:SMU2:INAME 'Icc'\")");
runcli("gpibsend(0x1,\":PAGE:CHAN:SMU2:MODE V\")");
runcli("gpibsend(0x1,\":PAGE:CHAN:SMU2:FUNCTION CONSTANT\")");
# function constant
runcli("gpibsend(0x1,\":PAGE:CHAN:SMU3:VNAME 'Vpp'\")");
runcli("gpibsend(0x1,\":PAGE:CHAN:SMU3:INAME 'Ipp'\")");
runcli("gpibsend(0x1,\":PAGE:CHAN:SMU3:MODE V\")");
runcli("gpibsend(0x1,\":PAGE:CHAN:SMU3:FUNCTION CONSTANT\")");
#runcli("gpibsend(0x1,\":PAGE:CHAN:SMU3:DIS\")");
runcli("gpibsend(0x1,\":PAGE:CHAN:SMU4:DIS\")");
runcli("gpibsend(0x1,\":PAGE:CHAN:VSU1:DIS\")");
# Voltage Source Unit DISabled
runcli("gpibsend(0x1,\":PAGE:CHAN:VSU2:DIS\")");
runcli("gpibsend(0x1,\":PAGE:DISP:LIST:SELECT 'Vout'\")");
# DISPlay LIST
runcli("gpibsend(0x1,\":PAGE:DISP:LIST:SELECT 'Iout'\")");
runcli("gpibsend(0x1,\":PAGE:DISP:LIST:SELECT 'Vcc'\")");
runcli("gpibsend(0x1,\":PAGE:DISP:LIST:SELECT 'Icc'\")");
runcli("gpibsend(0x1,\":PAGE:MEAS:SWEEP:VAR1:MODE SINGLE\")");
# Single Stair Sweep
runcli("gpibsend(0x1,\":PAGE:MEAS:SWEEP:VAR1:SPACING LINEAR\")");
# The sweep is incremented (decremented) by the
# stepsize until the stop value is reached.
runcli("gpibsend(0x1,\":PAGE:MEAS:SWEEP:VAR1:START 2.8\")");
# Setting the sweep range of Vout
runcli("gpibsend(0x1,\":PAGE:MEAS:SWEEP:VAR1:STOP 18\")");
runcli("gpibsend(0x1,\":PAGE:MEAS:SWEEP:VAR1:STEP 0.1\")");
runcli("gpibsend(0x1,\":PAGE:MEAS:SWEEP:VAR1:COMPLIANCE 0.05\")");
# Compliance: meaning the stable state of voltage, on the parametric analyzer
runcli("gpibsend(0x1,\":PAGE:MEAS:SWEEP:VAR1:PCOMPLIANCE:STATE 0\")");
# PCOMPLIANCE: Might be the state before the stable state
runcli("gpibsend(0x1,\":PAGE:MEAS:DEL 2\")");
# Delay
runcli("gpibsend(0x1,\":PAGE:MEAS:HTIM 50\")");
# Hold Time
runcli("gpibsend(0x1,\":PAGE:MEAS:SWEEP:CONS:SMU2:SOURCE 3.3\")");
# Setting the values for VCC
runcli("gpibsend(0x1,\":PAGE:MEAS:SWEEP:CONS:SMU2:COMP 0.1\")");
runcli("gpibsend(0x1,\":PAGE:MEAS:SWEEP:CONS:SMU3:SOURCE 12\")");
# Setting the values for VPP
runcli("gpibsend(0x1,\":PAGE:MEAS:SWEEP:CONS:SMU3:COMP 0.1\")");
runcli("gpibsend(0x1,\":PAGE:SCON:SING\")");
sleep(2);
runcli("ctst");
runcli("stst");
sleep(2);
runcli("pu;rs");
runcli("B16R_vpasscp_vpp.txt()");
runcli("regaccess(static_load,0x9,0x9,$k)");
# Using the Cregs 0x9 to modulate the frequency
runcli("adputr(0xcf,0x03)");
runcli("rs");
poll_4156c();
}
sub poll_4156c{
runcli("gpibsend(0x1,\":STAT:OPER:COND?\");");
# This command returns the present status of the Operation
# Status "CONDITION" register. Reading this register does not clear
it.
runcli("gpibreceive(0x1);");
while((get_gpib_status() > 0)&&($foundError < 1) ){
sleep(3);
runcli("gpibsend(0x1,\":STAT:OPER:COND?\")");
runcli("gpibreceive(0x1);");
}
}#end poll_4156c subroutine
sub get_gpib_status{
# get file info
$host_meas = getHost();
# Retrieve the nextest station detail; will return something like mav2pt - 0014
$file_meas = $host_meas."_temp.cli";
# Define the file_meas as the nextest cli temporary file; contains all
# the text as displayed on Nextest CLI
open(STATUS, "$file_meas" ) || die("Can't open logfile: $!");
print "\nSTATUS received from GPIB:";
while(<STATUS>)
{
if(/Error/){
runcli("gpibinit;");
$foundError = 1;
}
else
{
chomp;
s/\+//g;
$value = $_;
$foundError = 0;
}
} # End of while(<INMEAS>) loop.
close(STATUS);
#print "value = $value";
return($value);
}#End of get_gpib_status subroutine.
$i=$i+1;
}
$k=$k+8;
}
Ad 1. /Error/ is a regular expression matching any string that contains "Error" as a substring; in this context (if(/Error/)) it is evaluated as a boolean expression, i.e. it is true if a match is found in the $_ variable.
Ad 2. chomp removes the trailing newline character from its argument or, in the absence of an argument, from the $_ variable; s/\+//g; replaces any + signs found in $_ with nothing (i.e. removes them). As you might have already noticed, many Perl constructs operate on $_ if not told otherwise.
get_gpib_status is defined in your script: sub get_gpib_status is the beginning of its definition (in Perl, functions are defined using the sub keyword, which stands for "subroutine").
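As for a Python equivalent of get_gpib_status: a rough, untested sketch, assuming runcli and getHost have already been ported as Python functions with the same behavior:

found_error = 0

def get_gpib_status():
    global found_error
    host_meas = getHost()              # assumed Python port of getHost
    file_meas = host_meas + "_temp.cli"
    value = None
    print("\nSTATUS received from GPIB:")
    with open(file_meas) as status:    # Perl dies on failure; Python raises OSError
        for line in status:
            if "Error" in line:        # if(/Error/)
                runcli("gpibinit;")    # assumed Python port of runcli
                found_error = 1
            else:
                line = line.rstrip("\n")      # chomp
                line = line.replace("+", "")  # s/\+//g
                value = line                  # $value = $_
                found_error = 0
    return value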
And finally, I offer my sincere condolences. Giving the task of rewriting a program from one language to another to someone without prior experience with either language is probably not the stupidest managerial decision I've heard of, but high on the list.