I've been playing around with the Python bindings for libtorrent/rasterbar.
What I wanted to do was generate a new 'node-id' and reannounce it to the other nodes.
I read that a 'bencoded dictionary' needs to be created, and I assume it's then announced using something like force_dht_reannounce. Is this correct?
You can force libtorrent to use a specific node ID for the DHT by crafting a session-state file and feeding it to the session::load_state() function. Once you do this, you also need to restart the DHT by calling session::stop_dht() followed by session::start_dht().
The relevant parts of the session state you need to craft have the following format (bencoded):
{
    "dht state": {
        "node-id": "<20-byte binary node-ID>"
    }
}
If you want to keep the rest of the session state, it might be a good idea to first call session::save_state() and then simply insert/overwrite the node-id field.
Something like this:
state = ses.save_state()
state["dht state"]["node-id"] = "<...>"
ses.load_state(state)
ses.stop_dht()
ses.start_dht()
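For completeness, here is a minimal sketch of generating the 20 random bytes for the new node ID (my addition, not from the original answer; os.urandom() is just one convenient source of random bytes):

import os

# DHT node IDs are 20-byte (SHA-1 sized) values; generate a fresh
# random one and splice it into the saved state before reloading.
state["dht state"]["node-id"] = os.urandom(20)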
So I was wondering if it's possible to create a script that checks if a node is offline and, if it is, brings it back online. The login used should be by username and token.
I'm talking about a script that triggers this button on the right (the temporarily-offline toggle):
TL;DR: the scripted action for that button is .doToggleOffline:
Jenkins.instance.getNode('Node-Name').getComputer().doToggleOffline(offlineMessage)
I knew I had dealt with this before but did not recall the cliOnline() command. In looking it up I noticed it was deprecated. Turns out I used a different approach.
I can't say I fully understand the possible states and their applicability, as they're not well-documented. The states shown below are as reflected in the Build Executor Status side panel; the /computer (Manage nodes and clouds) table will only show the computer with or without an X.
// Connect (launch) / disconnect the node
Jenkins.instance.getNode('Node-Name').getComputer().launch()
Jenkins.instance.getNode('Node-Name').getComputer().disconnect()
// Make this node temporarily offline (true) / bring this node back online (false);
// the second argument is an OfflineCause (may be null)
Jenkins.instance.getNode('Node-Name').getComputer().setTemporarilyOffline(false, null)
// Availability: accepting tasks (true) / not accepting tasks (false)
Jenkins.instance.getNode('Node-Name').getComputer().setAcceptingTasks(true)
The isAcceptingTasks() JavaDoc explains this as:
Needed to allow agents programmatic suspension of task scheduling that
does not overlap with being offline.
The isTemporarilyOffline() JavaDoc elaborates:
Returns true if this node is marked temporarily offline by the user.
In contrast, isOffline() represents the actual online/offline state.
JavaDoc for isOffline (both Temporarily and Disconnected), setTemporarilyOffline and setAcceptingTasks.
But, after all that, turns out there's one more option:
def offlineMessage = "I did it"
Jenkins.instance.getNode('Node-Name').getComputer().doToggleOffline(offlineMessage)
And if you run that from the Groovy console, it toggles the state (so I guess you should check the state first); running it again toggles it back.
My experience relates to JENKINS-59283 (Use distinct icon for disconnected and temporarily offline computers / PR-4195): I had brought agents online when they should have been unavailable per their schedule (Node Availability: Bring this agent online according to a schedule), so nothing ran. The PR was to introduce a yellow X for the not-accepting-but-online condition, but the icons have since changed.
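Since the question asks for a login by username and token, here is a rough sketch (my addition, not part of the original answer) of driving that Groovy one-liner remotely through Jenkins' /scriptText endpoint from Python; the URL, node name, message, and credentials are placeholders:

import requests

JENKINS_URL = "https://jenkins.example.com"  # placeholder
AUTH = ("my-username", "my-api-token")       # username + API token

# Check the state first, then toggle, as noted above
groovy = """
def computer = Jenkins.instance.getNode('Node-Name').getComputer()
if (computer.isTemporarilyOffline()) {
    computer.doToggleOffline('brought back online by script')
}
"""

# /scriptText runs a Groovy script server-side and returns its output
resp = requests.post(JENKINS_URL + "/scriptText", auth=AUTH, data={"script": groovy})
resp.raise_for_status()
print(resp.text)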
If you simply want to bring temporarily disabled nodes back online, you can use the following script.
def jenkinsNodes = Jenkins.instance.getNodes()
def nodeLabelToMatch = "label1"
for (def node : jenkinsNodes) {
    if (node.labelString.contains(nodeLabelToMatch)) {
        if (node.getComputer().isOffline()) {
            node.getComputer().cliOnline()
        }
    }
}
Update: Full Pipeline
The script is written in Groovy.
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                script {
                    def jenkinsNodes = Jenkins.instance.getNodes()
                    def nodeLabelToMatch = "label1"
                    for (def node : jenkinsNodes) {
                        if (node.labelString.contains(nodeLabelToMatch)) {
                            if (node.getComputer().isOffline()) {
                                node.getComputer().cliOnline()
                            }
                        }
                    }
                }
            }
        }
    }
}
Non-Deprecated Method
If you look at this deprecated method, it simply calls a non-deprecated method, setTemporarilyOffline(boolean temporarilyOffline, OfflineCause cause). So instead of using cliOnline() you can use setTemporarilyOffline. Check the following:
node.getComputer().setTemporarilyOffline(false, null)
Here is some proper code with a proper cause. The cause is not really needed when setting the node online, though.
import hudson.slaves.OfflineCause.UserCause
def jenkinsNodes = Jenkins.instance.getNodes()
for (def node : jenkinsNodes) {
    if (node.getComputer().isTemporarilyOffline()) {
        node.getComputer().setTemporarilyOffline(false, null)
    }
}
Setting the node to temporarily offline:
UserCause cause = new UserCause(User.current(), "This is an automated process!!")
node.getComputer().setTemporarilyOffline(true, cause)
I'll preface this by saying I'm fairly new to BigQuery. I'm running into an issue when trying to schedule a query using the Python SDK. I used the example on the documentation page and modified it a bit, but I'm getting errors.
Note that my query does use scripting to set some variables, and it's using a MERGE statement to update one of my tables. I'm not sure if that makes a huge difference.
from google.cloud import bigquery_datatransfer

transfer_client = bigquery_datatransfer.DataTransferServiceClient()

def create_scheduled_query(dataset_id, project, name, schedule, service_account, query):
    parent = transfer_client.common_project_path(project)
    transfer_config = bigquery_datatransfer.TransferConfig(
        destination_dataset_id=dataset_id,
        display_name=name,
        data_source_id="scheduled_query",
        params={
            "query": query
        },
        schedule=schedule,
    )
    transfer_config = transfer_client.create_transfer_config(
        bigquery_datatransfer.CreateTransferConfigRequest(
            parent=parent,
            transfer_config=transfer_config,
            service_account_name=service_account,
        )
    )
    print("Created scheduled query '{}'".format(transfer_config.name))
I was able to successfully create a query with the function above. However the query errors out with the following message:
Error code 9 : Dataset specified in the query ('') is not consistent with Destination dataset '{my_dataset_name}'.
I've tried passing in "" as the dataset_id parameter, but I get the following error from the Python SDK:
google.api_core.exceptions.InvalidArgument: 400 Cannot create a transfer with parent projects/{my_project_name} without location info when destination dataset is not specified.
Interestingly enough I was able to successfully create this scheduled query in the GUI; the same query executed without issue.
I saw that the GUI showed the scheduled query's "Resource name" referenced a transferConfig, so I used the following command to see what that transferConfig looked like, to see if I could apply the same parameters using my Python script:
bq show --format=prettyjson --transfer_config {my_transfer_config}
Which gave me the following output:
{
    "dataSourceId": "scheduled_query",
    "datasetRegion": "us",
    "destinationDatasetId": "",
    "displayName": "test_scheduled_query",
    "emailPreferences": {},
    "name": "{REDACTED_TRANSFER_CONFIG_ID}",
    "nextRunTime": "2021-06-18T00:35:00Z",
    "params": {
        "query": ....
So it looks like the GUI was able to use "" for destinationDatasetId, but for whatever reason the Python SDK won't let me use that value.
Any help would be appreciated, since I prefer to avoid the GUI whenever possible.
UPDATE:
This does appear to be related to the scripting I used in my query. I removed the scripts from the query and it's working. I'm going to leave this open because I feel like this should be possible using the SDK since the query with scripting works in the console without issue.
This same thing also threw me for a loop, but I managed to figure out what was wrong. The problem is with the
parent = transfer_client.common_project_path(project)
line given in the example. By default, this returns something of the form projects/{project_id}. However, the CreateTransferConfigRequest documentation says of the parent parameter:
The BigQuery project id where the transfer configuration should be created. Must be in the format projects/{project_id}/locations/{location_id} or projects/{project_id}. If specified location and location of the destination bigquery dataset do not match - the request will fail.
Sure enough, if you use the projects/{project_id}/locations/{location_id} format instead, it resolves the error and allows you to pass a null destination_dataset_id.
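For instance, a minimal sketch of the fix (my addition; "us" is an assumed location, use the region of your destination dataset):

# Build the parent with an explicit location so destination_dataset_id
# can be left unset; "us" is an assumed region here.
parent = transfer_client.common_location_path(project, "us")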
I had the exact same issue; the fix is below. This method returns projects/{project_id}:
parent = transfer_client.common_project_path(project_id)
Instead, use the following method, which returns projects/{project}/locations/{location}:
parent = transfer_client.common_location_path(project_id, "EU")
After making the above change, I was able to schedule a script in BQ.
I want to implement a dynamic FTPSensor of sorts. Using the contributed FTP sensor I managed to make it work this way:
ftp_sensor = FTPSensor(
    task_id="detect-file-on-ftp",
    path="./data/test.txt",
    ftp_conn_id="ftp_default",
    poke_interval=5,
    dag=dag,
)
and it works just fine. But I need to pass dynamic path and ftp_conn_id params. That is, I generate a bunch of new connections in a previous task, and in the ftp_sensor task I want to check, for each of those connections, whether there's a file present on the FTP.
So my first thought was to grab the connection IDs from XCom: I send them from the previous task via XCom, but it seems I cannot access XCom outside of tasks.
E.g. I was aiming at something like:
active_ftp_connections = context['ti'].xcom_pull(key='active_ftps')
for conn in active_ftp_connections:
    ftp_sensor = FTPSensor(
        task_id="detect-file-on-ftp",
        path=conn['path'],
        ftp_conn_id=conn['connection'],
        poke_interval=5,
        dag=dag,
    )
but this doesn't seem to be a possible solution.
Then I wasted a good amount of time trying to create a custom FTPSensor to which I could pass the data I need dynamically, but I've now reached the conclusion that I need a hybrid between a sensor and an operator, because I need to keep the poke functionality but also have the execute functionality. I guess one option is to write a custom operator that implements poke from the sensor base class, but I'm probably too tired to try to do it now; a rough sketch of the idea is below.
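The kind of hybrid I have in mind would look roughly like this (a rough, untested sketch; FTPHook and its get_mod_time() are what the contributed FTPSensor pokes with, and the XCom key 'active_ftps' and its 'folder'/'conn_id' fields match my earlier task):

import time

from airflow.models import BaseOperator
from airflow.providers.ftp.hooks.ftp import FTPHook

class FTPPollOperator(BaseOperator):
    """Operator that pokes inside execute(), where XCom is available."""

    def __init__(self, poke_interval=5, timeout=300, **kwargs):
        super().__init__(**kwargs)
        self.poke_interval = poke_interval
        self.timeout = timeout

    def execute(self, context):
        # XCom can be read here, unlike at DAG-parse time
        ftps = context["ti"].xcom_pull(key="active_ftps")
        pending = {f["conn_id"]: "./" + f["folder"] + "/test.txt" for f in ftps}
        deadline = time.monotonic() + self.timeout
        while pending and time.monotonic() < deadline:
            for conn_id, path in list(pending.items()):
                try:
                    # get_mod_time() raises while the file is still absent
                    FTPHook(ftp_conn_id=conn_id).get_mod_time(path)
                    del pending[conn_id]
                except Exception:
                    pass
            if pending:
                time.sleep(self.poke_interval)
        if pending:
            raise RuntimeError("files still missing on: %s" % list(pending))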
Do you have an idea how to achieve what I am aiming at? I can't seem to find any materials on the topic on the internet - maybe it's just me.
Let me know if the question is not clear so I can provide more details.
Update
I have now arrived at this as a possibility:
def get_active_ftps(**context):
    active_ftp_connections = context['ti'].xcom_pull(key='active_ftps')
    return active_ftp_connections

for ftp in get_active_ftps():
    ftp_sensor = FTPSensor(
        task_id="detect-file-on-ftp",
        path="./" + ftp['folder'] + "/test.txt",
        ftp_conn_id=ftp['conn_id'],
        poke_interval=5,
        dag=dag,
    )
but it throws an error: Broken DAG: [/usr/local/airflow/dags/copy_file_from_ftp.py] 'ti'
I managed to do it like this:
active_ftp_folder = Variable.get('active_ftp_folder')
active_ftp_conn_id = Variable.get('active_ftp_conn_id')

ftp_sensor = FTPSensor(
    task_id="detect-file-on-ftp",
    path="./" + active_ftp_folder + "/test.txt",
    ftp_conn_id=active_ftp_conn_id,
    poke_interval=5,
    dag=dag,
)
I'll just have the DAG run one FTP account at a time, since I realized that there shouldn't be cycles in a directed acyclic graph ... apparently.
I'm writing something for a game that involves networks. In this game, a network is a class and the "connections" to each node are formatted like:
network.nodes = [router, computer1, computer2]
network.connections = [ [1, 2], [0], [0] ]
Each index in network.nodes corresponds to the same index in network.connections, meaning network.connections[0] lists (by index) all the nodes network.nodes[0] is connected to. I'm trying to write a simple function in the network class that finds a route starting from the router (network.nodes[0]) to a specific node. The more thought I put into this, the more complicated the answer seems to be.
In this rather simple case it should return something like:
[router, computer1]
That's what I'd like to see if I was trying to find a route to "computer1", but I need something that will work with more complicated network simulations.
It's basically a simulator for a computer network, but in this game I need to be able to know exactly which nodes something might travel through to reach a specific target.
Any help would be greatly appreciated. Thanks.
How about dropping .nodes and .connections and just keeping everything in one data structure, like a dictionary?
network.nodes = {"router": [computer1, computer2],
"computer1": [router],
"computer2": [router]
}
You could even drop the strings as keys and use the objects themselves:
network.nodes = {router: [computer1, computer2],
                 computer1: [router],
                 computer2: [router]
                 }
That way, if you need to access the connections for the router you would do:
>>> network.nodes[router]
[computer1, computer2]
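If you already have the parallel lists from the question, building that dictionary is straightforward (a sketch based on the structures shown above):

# Build the {node: [connected nodes]} map from the parallel lists;
# each entry in network.connections holds indices into network.nodes.
network_map = {
    node: [network.nodes[i] for i in indices]
    for node, indices in zip(network.nodes, network.connections)
}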
Because I don't have a full overview of your project, I can't just give you a function to do that, but I can try and point you in the right direction.
If you build the network 'map' up as a dictionary, and network.nodes[router] returns [computer1, computer2], the next thing you would need to do is network.nodes[computer1] and network.nodes[computer2].
In your firewall example from the comments, you would rebuild the network map to include the firewall. So the dictionary would look like this:
network.nodes = {router: [firewall, computer2],
                 firewall: [computer1],
                 computer1: [firewall],
                 computer2: [router]
                 }
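From there, a breadth-first search over that dictionary is one standard way to find a route. Here is a minimal sketch (my addition, assuming the map structure above):

from collections import deque

def find_route(network_map, start, target):
    """Return the first (shortest) route from start to target, or None."""
    queue = deque([[start]])  # each queue entry is a partial route
    visited = {start}
    while queue:
        route = queue.popleft()
        node = route[-1]
        if node == target:
            return route
        for neighbour in network_map.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(route + [neighbour])
    return None

With the firewall map above, find_route(network.nodes, router, computer1) would return [router, firewall, computer1].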
As you will be able to see from my previous questions, I have been working on a project, and really want to know how I can get this last part finished.
Quick summary of the project: I have a Raspberry Pi that is running a web server (lighttpd) and Flask. It has an RF USB transmitter connected, which controls the power of a plug via a Python script (Power.py on GitHub). This works.
I now need to create an Endpoint in Flask so that Salesforce can send it some JSON, and it will understand it.
I want to keep this as simple as I can, so I understand what it's actually doing. In my last question someone did provide me with something, but I thought it'd be better to have a specific question relating to it, rather than trying to cover too much in one.
All I need to be able to send is 'power=on/off', 'device=0,1,2', 'time=(seconds as integer)' and 'pass=thepassword'. I can send these as URL variables, or as a POST to my existing power.py linked above, and it does the job.
I would like a simple, clear way of sending this from Salesforce in JSON, to Flask and make it understand the request.
Literally all I need to do now is go to: ip/cgi-bin/power.py?device=0&power=on&time=10&pass=password
That would load a Python script, and turn device 0 on for 10 seconds. (0 is unlimited).
How can I convert that to JSON? What code do I need to put into Flask for it to be able to comprehend that? Can I forward the variables onto the power.py so the Flask file only has to find the variables and values?
I have downloaded Postman in Chrome, and this allows me to send POST's to the Pi to test things.
Where can I find out more info about this, as a beginner?
Can I send something like this?
requestNumber = JSONRequest.post(
    "ip/api.py",
    {
        deviceid: 0,
        pass: "password",
        time: 60,
        power: "on"
    }
)
I don't know how you can get Salesforce to send a POST request with an associated JSON, but capturing it with Flask is fairly easy. Consider the following example:
from flask import request, make_response
from yourmodule import whatever_function_you_want_to_launch
from your_app import app

@app.route('/power/', methods=['POST'])
def power():
    if request.headers['Content-Type'] == 'application/json':
        return whatever_function_you_want_to_launch(request.json)
    else:
        return make_response("json record not found in request", 415)
When Salesforce visits the URL http://example.com/power/, your application executes the power() function, which passes a dictionary containing the JSON contents to the whatever_function_you_want_to_launch function. That function can use the dictionary to trigger whatever action you want to take, and return a response back to the power() function. The power() function then returns this response back to Salesforce.
For example:
from flask import make_response

def whatever_function_you_want_to_launch(data):
    device = data['deviceid']
    power = data['power']
    message = ""
    if power == "on":
        turn_power_on(device)   # your own device-control helper
        message = "power turned on for device " + str(device)
    else:
        turn_power_off(device)  # your own device-control helper
        message = "power turned off for device " + str(device)
    return make_response(message, 200)
This is just a short example, of course; you'd need to add some additional handling (e.g. for the case where the JSON is malformed or does not contain one of the required keys).
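For instance, a sketch of that validation (my addition, not from the original answer; request.get_json(silent=True) returns None when the body isn't valid JSON):

from flask import request, make_response

REQUIRED_KEYS = ("deviceid", "pass", "time", "power")

@app.route('/power/', methods=['POST'])
def power():
    data = request.get_json(silent=True)  # None if the body isn't valid JSON
    if data is None:
        return make_response("json record not found in request", 415)
    missing = [key for key in REQUIRED_KEYS if key not in data]
    if missing:
        return make_response("missing keys: " + ", ".join(missing), 400)
    return whatever_function_you_want_to_launch(data)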
In order to test the whole thing you can also use the curl command (available on Linux; I don't know about other OSs) with this type of syntax:
curl -H "Content-type: application/json" -X POST http://localhost:5000/power/ -d '{"deviceid": "0", "pass": "password", "time": "60", "power": "on"}'