I am using Chef to test software, so the file name and download location of said software are dynamic and would be passed in as attributes.
Note that, as part of the test procedure, I have to use the Chef scripts and recipes that our operations team is using. They have the values in question set at the environment level and at the cookbook's default.rb level. They use a Ruby script to set up the VM via knife openstack and add that server to Chef via the REST API:
Chef::Config.from_file("/root/.chef/knife.rb")
rest = Chef::REST.new(CHEF_API)
newserver = {
  :name => server.hostname,
  :chef_type => "node",
  :chef_environment => server.environment,
  :json_class => "Chef::Node",
  :attributes => {
    :cobbler_profile => server.profile
  },
  :overrides => {},
  :defaults => {},
  :run_list => server.roles
}
begin
  result = rest.post_rest("/nodes", newserver)
....
Ideally, the file name and location would be passed into the Python app as command-line parameters, which would then use knife or PyChef (or Ruby, if I have to...) to set/override the existing node-level attributes.
The method that they use to add the server leaves out the -j option that I've seen in other similar questions.
I have tried knife node edit, but that requires the use of an editor.
I have tried
node = chef.Node('myNode')
node.override['testSoftware']['downloads']['testSoftwareInstaller'] = 'http://location/of/download'
node.save()
But node.override['testSoftware']['downloads']['testSoftwareInstaller'] subsequently returns the original value (and the original value can still be seen in the UI). It seems that you can only set new attributes this way, not edit/overwrite existing ones.
I am contemplating simply generating the environment .json file dynamically... but would prefer not to deviate from what operations is using.
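If I do go down that road, a minimal sketch of generating such an environment file might look like the following. This assumes the attribute names from the question, and that the resulting JSON would then be uploaded with knife environment from file; the environment name "test" is just a placeholder.

```python
import json


def build_environment_json(name, installer_url):
    """Build a Chef environment document whose override_attributes
    carry the dynamic download location (attribute names taken from
    the question above)."""
    return {
        "name": name,
        "json_class": "Chef::Environment",
        "chef_type": "environment",
        "override_attributes": {
            "testSoftware": {
                "downloads": {"testSoftwareInstaller": installer_url}
            }
        },
    }


env = build_environment_json("test", "http://location/of/download")
print(json.dumps(env, indent=2))
```

The resulting file could be written to disk and fed to knife, keeping the operations-owned recipes untouched.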
I am quite new to Chef, and you probably don't even need this after 3 years, but... I think you should be using node['override']['attribute'] instead of node.override['attribute']. The former is for setting values, the latter for getting values.
I am not saying this will work, since I haven't used Chef with Python, but I think that's how it works.
So I was wondering if it's possible to create a script that checks if a node is offline and, if it is, brings it back online. The login used should be by username and token.
I'm talking about a script that triggers this button on the right:
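Since the question asks for username-and-token login, one option outside the Groovy console is the HTTP endpoint behind that same button: Jenkins exposes it as a POST to /computer/<node>/toggleOffline. The sketch below only builds the request; the URL, node name, and credentials are placeholders. Note that toggleOffline flips the current state, so you would want to check the node's state first (e.g. via the temporarilyOffline field of /computer/<node>/api/json).

```python
import base64
import urllib.parse
import urllib.request


def toggle_offline_request(base_url, node, user, api_token, message=""):
    """Build a POST request for Jenkins' /computer/<node>/toggleOffline
    endpoint, authenticated with HTTP basic auth (username + API token).
    All argument values here are placeholders."""
    url = (f"{base_url.rstrip('/')}/computer/{urllib.parse.quote(node)}"
           f"/toggleOffline?offlineMessage={urllib.parse.quote(message)}")
    credentials = base64.b64encode(f"{user}:{api_token}".encode()).decode()
    request = urllib.request.Request(url, data=b"", method="POST")
    request.add_header("Authorization", f"Basic {credentials}")
    return request


# To actually send it:
#   urllib.request.urlopen(toggle_offline_request(
#       "https://jenkins.example.com", "Node-Name", "user", "token"))
```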
TL;DR: the scripted action for that button is .doToggleOffline:
Jenkins.instance.getNode('Node-Name').getComputer().doToggleOffline(offlineMessage)
I knew I had dealt with this before but did not recall the cliOnline() command. In looking it up I noticed it was deprecated. Turns out I used a different approach.
Can't say I fully understand the possible states and their applicability, as it's not well-documented. The table shown below is as reflected in the Build Executor Status side panel; the /computer (Manage nodes and clouds) table will only show the computer with or without an X.
// Connect (Launch) / Disconnect Node
Jenkins.instance.getNode('Node-Name').getComputer().launch()
Jenkins.instance.getNode('Node-Name').getComputer().disconnect()
// Make this node temporarily offline (true) / Bring this node back online (false)
Jenkins.instance.getNode('Node-Name').getComputer().setTemporarilyOffline(false, null)  // second argument is an OfflineCause (may be null)
// Availability: Accepting Tasks (true) / Not Accepting Tasks (false)
Jenkins.instance.getNode('Node-Name').getComputer().setAcceptingTasks(true)
The isAcceptingTasks() JavaDoc explains this as:
Needed to allow agents programmatic suspension of task scheduling that
does not overlap with being offline.
The isTemporarilyOffline() JavaDoc elaborates:
Returns true if this node is marked temporarily offline by the user.
In contrast, isOffline() represents the actual online/offline state.
See the JavaDoc for isOffline (covering both temporarily offline and disconnected), setTemporarilyOffline, and setAcceptingTasks.
But, after all that, turns out there's one more option:
def offlineMessage = "I did it"
Jenkins.instance.getNode('Node-Name').getComputer().doToggleOffline(offlineMessage)
And if you run that from the Groovy console, it toggles the state (so I guess you should check the state first):
And run it again:
My experience relates to JENKINS-59283 (Use distinct icon for disconnected and temporarily offline computers / PR-4195) and to having brought agents online when they should have been unavailable per schedule (Node Availability: Bring this agent online according to a schedule), so nothing ran. The PR was to introduce a yellow X for the Not Accepting but Online condition, but the icons have since changed.
If you simply want to bring temporarily offline nodes back online, you can use the following script.
def jenkinsNodes = Jenkins.instance.getNodes()
def nodeLabelToMatch = "label1"
for (def node : jenkinsNodes) {
    if (node.labelString.contains(nodeLabelToMatch)) {
        if (node.getComputer().isOffline()) {
            node.getComputer().cliOnline()
        }
    }
}
Update: Full Pipeline
The script is written in Groovy.
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                script {
                    def jenkinsNodes = Jenkins.instance.getNodes()
                    def nodeLabelToMatch = "label1"
                    for (def node : jenkinsNodes) {
                        if (node.labelString.contains(nodeLabelToMatch)) {
                            if (node.getComputer().isOffline()) {
                                node.getComputer().cliOnline()
                            }
                        }
                    }
                }
            }
        }
    }
}
Non-Deprecated Method
If you look at this deprecated method, it simply calls a non-deprecated method, setTemporarilyOffline(boolean temporarilyOffline, OfflineCause cause). So instead of using cliOnline() you can use setTemporarilyOffline. Check the following.
node.getComputer().setTemporarilyOffline(false, null)
Some proper code with a proper cause. The cause is not really needed when setting the node online though.
import hudson.slaves.OfflineCause.UserCause

def jenkinsNodes = Jenkins.instance.getNodes()
for (def node : jenkinsNodes) {
    if (node.getComputer().isTemporarilyOffline()) {
        node.getComputer().setTemporarilyOffline(false, null)
    }
}
Setting to temporarily offline
UserCause cause = new UserCause(User.current(), "This is an automated process!!")
node.getComputer().setTemporarilyOffline(true, cause)
TL;DR
I have a VSCode extension acting as a client for an LSP server written in Python (using the pygls library) and I can't seem to get basic requests sent to my LSP server from the extension.
The Longer Version
I'm working on an LSP server for a custom YAML-based language and have run into some issues I can't seem to resolve. Specifically, our tool is written in Python, so I'm using pygls to ease the creation of the language server and am creating a VSCode extension to handle the client side of things.
At this stage, my goal is to get a very basic hover functionality to work where I've hard-coded the hover text to be displayed to the user. Unfortunately, I have yet to be able to get this to work as it doesn't seem like my client is correctly sending the request to the server, or my server is not correctly handling it.
To try and resolve this, I have:
Looked through several examples of extensions to see how my VSCode extension differs (to name a few):
Microsoft/vscode-extension-samples: lsp-sample
openlawlibrary/pygls: json-extension
Eugleo/magic-racket
Gone through the LSP Specification to know which parameters need to be passed for the Hover request.
Gone through the VSCode Language Server Extension Guide
Looked through the vscode-languageserver-node code base for hints as to why my hover handler (in the server) is never getting called, etc.
Some things I noticed that were different with what I'm doing versus what others have:
Many extensions are using TypeScript to write the server; I'm using Python.
Some extensions do not add the call to client.start() to the context.subscriptions; I do.
Bonus: I get the sense that I should be sending the initialize request and the initialized notification before expecting any of the hovering to work but, from what I've seen, no other extension I've come across explicitly sends either of those. (Additionally, just because I was curious, I tried and it still didn't provide any different results.)
At this point, I'm really not sure where I'm going wrong - any insights/pointers are greatly appreciated. Thanks!
The relevant parts of my server implementation are as follows:
server.py
# imports elided
# we start the server a bit differently, but the essence is this:
server = LanguageServer()
server.start_ws("127.0.0.1", 8080)
@server.feature(methods.HOVER)
async def handle_hover(ls: LanguageServer, params: HoverParams):
    """Handle a hover event."""
    logger.info(f"received hover request\nparams are: {params.text_document.uri}")
    ls.show_message("received hover request")
    ls.show_message(f"file: {params.text_document.uri}; line: {params.position.line}; character: {params.position.character}")
    return Hover(contents="Hello from your friendly AaC LSP server!")
The relevant parts of the client (I think, I don't use VSCode so I may be missing something) are as follows:
AacLanguageServer.ts
// imports elided
// extension.ts (not this file) contains a call to the startLspClient(...) function
export class AacLanguageServerClient {
    private static instance: AacLanguageServerClient;
    private aacLspClient!: LanguageClient;

    // certain checks are elided for brevity
    private startLspClient(context: ExtensionContext, aacPath: string, host: string, port: number): void {
        if (this.aacLspClient) { return; }
        this.aacLspClient = new LanguageClient(
            "aac",
            "AaC Language Client",
            this.getServerOptions(aacPath, "start-lsp", "--host", host, "--port", `${port}`),
            this.getClientOptions(),
        );
        this.aacLspClient.trace = Trace.Verbose;
        context.subscriptions.push(this.aacLspClient.start());
        this.registerHoverProvider(context);
    }

    private async registerHoverProvider(context: ExtensionContext): Promise<void> {
        const client = this.aacLspClient;
        context.subscriptions.push(languages.registerHoverProvider({ scheme: "file", language: "aac", pattern: "**/*.yaml" }, {
            provideHover(document, position, token): ProviderResult<Hover> {
                window.showInformationMessage(
                    `File: ${document.uri.path}; Line: ${position.line}; Character: ${position.character}`
                );
                return client.sendRequest("textDocument/hover", {
                    textDocument: document,
                    position: position,
                }, token);
            }
        }));
    }

    private getServerOptions(command: string, ...args: any[]): ServerOptions {
        return {
            args,
            command,
        };
    }

    private getClientOptions(): LanguageClientOptions {
        return {
            documentSelector: [
                { scheme: "file", language: "aac", pattern: "**/*.aac" },
                { scheme: "file", language: "aac", pattern: "**/*.yaml" },
            ],
            diagnosticCollectionName: "aac",
            outputChannelName: "Architecture-as-Code",
            synchronize: {
                fileEvents: workspace.createFileSystemWatcher("**/.clientrc"),
            }
        };
    }
}
When I debug this in VSCode, I can see that the server is started in the session, but if I hover over any symbol in the shown file, nothing happens. Can someone point me in the right direction?
Finally, all the code (in its current state as of the time of posting this question) can be found in this repository, if a more complete code reference is desired. The LSP server-related code is located in the python/src/aac/lang/server.py file, and the client-related code is located in the vscode_extension/src/AacLanguageServer.ts file.
NOTE: This is still an incomplete answer! That's why I haven't accepted it.
I will update this when I figure out how to get the TCP and WebSocket servers working, but apparently a "good-enough" way to get the VSCode extension and my LSP server to communicate is to use the pygls I/O server for everything, regardless of being in development mode.
Just creating a new document and setting the language to 'aac' wasn't enough.
But saving it with the extension .aac made the server active, and "data" and "import" were suggested; so I tried hovering over a word and, sure enough, got a response:
And the output to the channel:
By the way, I think I found a bug: shouldn't the error messages on lines 39-40 of AacLanguageServerClient.ts use the variable name instead of item?
assertTrue(item.length > 0, `Cannot start Language Server; '${name}' is not configured!`);
assertTrue(fs.existsSync(item), `Cannot use ${name} as it does not exist!`);
I'm trying to figure out how to connect pulumi_azure.compute.LinuxVirtualMachineScaleSet instance to a backend pool of pulumi_azure.network.ApplicationGateway using Python.
Looking at documentation of pulumi_azure.compute.LinuxVirtualMachineScaleSet
( https://www.pulumi.com/docs/reference/pkg/azure/compute/linuxvirtualmachinescaleset )
it seems that the chain of necessary links would be:
step 1 - create LinuxVirtualMachineScaleSetNetworkInterfaceIpConfiguration instance with appropriate applicationGatewayBackendAddressPoolIds set
step 2 - create LinuxVirtualMachineScaleSetNetworkInterface instance with network interface ip configuration from step 1
step 3 - create LinuxVirtualMachineScaleSet with network_interface from step 2
However, while this is what the documentation says, LinuxVirtualMachineScaleSetNetworkInterfaceIpConfiguration and LinuxVirtualMachineScaleSetNetworkInterface are not defined in pulumi_azure.compute (version 3.17.0, the newest as of this writing).
Looking at code samples in both the documentation and pulumi_azure.compute's source code, the only way to set the network_interfaces argument of LinuxVirtualMachineScaleSet is to provide it with a list of dictionaries, e.g.
network_interfaces=[{
    "name": "example",
    "primary": True,
    "ip_configurations": [{
        "name": "internal",
        "primary": True,
        "subnet_id": ....
    }],
    "network_security_group_id": ...
}],
So what would be the correct way to associate scaling set with application gateway's backend pool?
After looking through the source code of pulumi_azure/compute/linux_virtual_machine_scale_set.py, I realized that LinuxVirtualMachineScaleSetNetworkInterfaceIpConfiguration etc. mentioned in Pulumi's documentation are not classes, but plain dictionaries.
The ip_configurations field of the network_interfaces argument to the scale set constructor accepts an optional applicationGatewayBackendAddressPoolIds keyword, which can be used to associate the scale set with the application gateway's backend pool.
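A sketch of what that dictionary shape might look like follows. These are plain dicts, not verified against a live deployment; example_pool_id stands in for the real backend pool ID you would get from the ApplicationGateway resource.

```python
# Hypothetical backend-pool ID; in real code this would come from the
# ApplicationGateway resource's backend address pools.
example_pool_id = ("/subscriptions/.../applicationGateways/appgw"
                   "/backendAddressPools/example-pool")

network_interfaces = [{
    "name": "example",
    "primary": True,
    "ip_configurations": [{
        "name": "internal",
        "primary": True,
        "subnet_id": "<subnet-id>",
        # The association asked about: attach the scale set's NICs
        # to the application gateway's backend pool.
        "applicationGatewayBackendAddressPoolIds": [example_pool_id],
    }],
}]
```

This list would then be passed as the network_interfaces argument when constructing LinuxVirtualMachineScaleSet.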
I've been playing around with the Python bindings for libtorrent/rasterbar.
What I wanted to do was generate a new 'node-id' and re-announce it to the other nodes.
I read that a 'bencoded dictionary' needs to be created and, I assume, announced using something like force_dht_reannounce. Is this correct?
You can force libtorrent to use a specific node ID for the DHT by crafting a session-state file, and feed it to the session::load_state() function. Once you do this, you also need to restart the DHT by calling session::stop_dht() followed by session::start_dht().
The relevant parts of the session state you need to craft have the following format (bencoded):
{
"dht state": {
"node-id": "<20-byte binary node-ID>"
}
}
If you want to keep the rest of the session state, it might be a good idea to first call session::save_state() and then simply insert/overwrite the node-id field.
Something like this:
state = ses.save_state()
state["dht state"]["node-id"] = "<...>"
ses.load_state(state)
ses.stop_dht()
ses.start_dht()
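For reference, the bencoded shape shown above can be produced with a minimal encoder. This is a pure-Python sketch, not part of the libtorrent API, and the node ID below is a stand-in value.

```python
def bencode(value):
    """Minimal bencoder covering the types needed for the DHT state dict:
    integers, strings/bytes, lists, and dicts (keys sorted, per the spec)."""
    if isinstance(value, int):
        return b"i%de" % value
    if isinstance(value, str):
        value = value.encode()
    if isinstance(value, bytes):
        return b"%d:%s" % (len(value), value)
    if isinstance(value, list):
        return b"l" + b"".join(bencode(v) for v in value) + b"e"
    if isinstance(value, dict):
        items = sorted((k.encode() if isinstance(k, str) else k, v)
                       for k, v in value.items())
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
    raise TypeError(f"cannot bencode {type(value)}")


node_id = b"\x01" * 20  # stand-in for a real 20-byte node ID
print(bencode({"dht state": {"node-id": node_id}}))
```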
So I want to create a very simple structure out of group and locator nodes in Maya which will then be exported for use in my game level.
e.g.
Group_Root
  group_parent
    group1
      locator1
    group2
      locator2
    group3
There is only one Group_Root in the file; there are many group_parents (each uniquely named).
However, all group_parents have the same three sub-group names ("group1", "group2", "group3"), and every group1 has a locator called locator1.
What I have so far is:
group_parent = c.group( em=True, name="group_parent", parent="Group_Root" )
modes = ["group1", "group2", "group3"]
for mode in modes:
    mode_group = c.group( em=True, n=mode, parent=group_parent )
    if mode == "group1":
        s = c.spaceLocator( name="locator1" )
        c.parent( mode_group )
    elif mode == "group3":
        s = c.spaceLocator( name="locator2" )
        c.parent( mode_group )
However, I get this error at c.parent(mode_group):
# Error: Object group1 is invalid
Presumably this is because there is more than one node called "group1", so it doesn't know which one to parent.
Any idea how I do this with full paths? e.g. "Group_Root|group_parent|group1"
Have you seen VFX Overflow? It's Q&A for visual effects, so I'd expect a number of the watchers to be quite familiar with Maya/MEL and Python. That said, it is fairly new so the user base is still small...
Names can be a bit annoying with MEL. In general, it's good practice to never trust a name to be what you're specifying.
This is a good example of how *not* to do things:
group -n myGroup1 circle1 sphere1;
...because that is in no way guaranteed to result in something named "myGroup1". The better way to do it is to run your command and capture the result in a string variable, such as:
string $result = `group -n myGroup circle1 sphere1`;
Then, use $result to refer to the resulting group. That will still work, even if the group ended up being called 'myGroup23'.
I'm not sure how the above looks in Python, as I'm mainly familiar with straight MEL, but the same principles should apply.
Another thing to look at is the namespace functionality (namespace and namespaceInfo), which could be used to define a new namespace for the unique, top-level group at hand.
Hope that helps
I'm guessing that, it being over two years, you've figured this one out by now... but for posterity, there were two issues. Firstly, you were spot on about the need for absolute paths; but there was also a slight bug in the way you were applying the maya.cmds.parent() call. I've done some light rewriting to illustrate. Mainly, you can use the fact that newly created things become selected by default, and maya.cmds.ls() is smart enough to return what you need. Ergo:
c.group( em=True, name="group_parent", parent="Group_Root" )
group_parent = c.ls( sl=True )[0]
modes = ["group1", "group2", "group3"]
for mode in modes:
    c.group( em=True, n=mode, parent=group_parent )
    mode_group = c.ls( sl=True )[0]
    if mode == "group1":
        c.spaceLocator( name="locator1" )
        s = c.ls( sl=True )[0]
        # maya.cmds.parent() with something selected will actually
        # parent the specified object to the selected object.
        # You don't want that.
        # We might as well use the explicit syntax to be sure
        # (parent everything specified to the last item in the list)
        c.parent( s, mode_group )
    elif mode == "group3":
        c.spaceLocator( name="locator2" )
        s = c.ls( sl=True )[0]
        c.parent( s, mode_group )