I am writing a Python script that will manage multiple Oracle databases on a single box. Each database has its own OracleService, but they all run under one TNSListener. Because each computer's install might name things differently, I want to make this as dynamic as possible.
First, I need to start the TNSListener service. Most of these are on local laptops that only start the listener when we are going to use an Oracle database. In addition, some laptops run different versions of Oracle, so the actual service name differs. For this I need to be able to find the full service name or names that contain the string 'TNSListener'.
Second, all of the OracleService names are suffixed with the instance name (e.g., OracleServiceTESTING1). So I need to get a list of all the OracleServices on the machine and then display a selection of the instances based on the suffix portion of the service names.
I thought about accessing the registry and trying to pull services from there, but the overhead of parsing through that seems excessive. I'm just looking for some general guidance on how to find all services that match the strings 'TNSListener' and 'OracleService'.
I would recommend a library like pywinservicemanager. A short code example to check if a particular service exists would look like this:
from pywinservicemanager.WindowsServiceConfigurationManager import ServiceExists

serviceName = 'TestService'
serviceExists = ServiceExists(serviceName)
print(serviceExists)
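If pywinservicemanager doesn't suit, another option (an assumption about your environment, not something from your question) is psutil, whose win_service_iter() enumerates the installed Windows services. Matching both name patterns and peeling off the instance suffix could look like this:

import psutil  # psutil's Windows-only service API

listeners = []
instances = []
for service in psutil.win_service_iter():
    name = service.name()
    if 'TNSListener' in name:
        # e.g. 'OracleOraDb11g_home1TNSListener' -- varies per install
        listeners.append(name)
    elif name.startswith('OracleService'):
        # The instance name is whatever follows the 'OracleService' prefix,
        # e.g. 'OracleServiceTESTING1' -> 'TESTING1'
        instances.append(name[len('OracleService'):])

print('Listener services:', listeners)
print('Oracle instances:', instances)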
I use tldextract (version 2.2.2) to extract subdomain/domain/suffix from URLs.
I recently noticed a result that I was surprised by:
>>> from tldextract import extract
>>> extract('http://althawrah.ye/archives/597366')
ExtractResult(subdomain='', domain='', suffix='althawrah.ye')
Instead of being picked up as the domain, althawrah is picked up as part of the suffix. Why is this?
Snooping around a bit, I notice in the Public Suffix List itself that .ye is one of a small number of suffixes that use a leading asterisk, e.g.
// fj : https://en.wikipedia.org/wiki/.fj
*.fj
// ye : http://www.y.net.ye/services/domain_name.htm
*.ye
The implication here is that these suffixes do not allow domain names to be registered directly under the suffix, but instead must be registered as a third level name. However, this is not the case with http://althawrah.ye/; that is, althawrah is not listed as a second-level domain of .ye. So, what is going on here?
Based on the history of the list and the description of the process for updating, it looks like the Yemen entry is simply wrong or out of date. The entry was added before 2007 (when the list was migrated from CVS to git), while the list guidelines state that:
Changes [for ICANN Domains] need to either come from a representative of the registry (authenticated in a similar manner to below) or be from public sources such as a registry website.
The website linked in the list (which hasn't changed since 2002) gives little detail but does mention URLs of the format www.yourcompany.com.ye, which is presumably where the *.ye rule came from. IANA's root zone database specifies TeleYemen as the current TLD manager, but there is no mention of domain registration on their site. The Wikipedia list of supposed "second level domains" was added in 2008 by a Canadian user linking to a since-deleted website of a company called phpcomet (archived here) which claimed to sell domains in the listed second level domains. However, a Google search for "site:ye" reveals plenty of sites outside those domains (e.g. press24.ye, ndc.ye) and fails to give any result for many of them (me.ye, co.ye, ltd.ye, plc.ye).
I'm not sure what could be done to update the official list, but I wouldn't be surprised if the correct entry would read something like:
ye
com.ye
edu.ye
gov.ye
org.ye
These changes were merged into publicsuffix/list in pull request 1189, thanks to TeleYemen and the project maintainers.
The list now specifies the second-level names explicitly and drops the wildcard asterisk.
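To make the wildcard behavior concrete, here is a minimal sketch of the suffix-matching step (illustrative only: it ignores exception rules and everything else the real PSL algorithm and tldextract do):

def public_suffix(hostname, rules):
    # Find the longest rule that matches the tail of the hostname;
    # a '*' label matches any single label.
    labels = hostname.split('.')
    best = 0
    for rule in rules:
        rule_labels = rule.split('.')
        if len(rule_labels) > len(labels):
            continue
        tail = labels[-len(rule_labels):]
        if all(r == '*' or r == t for r, t in zip(rule_labels, tail)):
            best = max(best, len(rule_labels))
    return '.'.join(labels[-best:]) if best else None

# Old entry: '*.ye' swallows both labels, leaving no registrable domain.
print(public_suffix('althawrah.ye', ['*.ye']))          # althawrah.ye
# Corrected entries: 'ye' matches only the last label, so 'althawrah'
# becomes the domain.
print(public_suffix('althawrah.ye', ['ye', 'com.ye']))  # ye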
MS Support recently told me that using a "GET" is much more efficient in RU usage than a SQL query. I'm wondering if I can (within the azure.cosmos Python package or a custom HTTP request to the REST API) get a document by its unique 'id' field (for which I generate GUIDs) without an SQL query.
Every example I have seen uses the link/path of the doc, which is built with the '_rid' metadata of the document and not the 'id' field set when creating the doc.
I use a bulk upsert stored procedure I wrote to create my new documents and never retrieve the metadata for each one of them (I have ~100 million docs), so retrieving the _rid would cost as much as retrieving the doc itself.
The reason that the ReadDocument method is so much more efficient than a SQL query is that it uses _rid instead of a user-generated field, even the required id field. This is because the _rid isn't just a unique value; it also encodes information about where that document is physically stored.
To give an example of how this works, let's say you are explaining to someone where a party is this weekend. You could use the name that you use for the house "my friend Ryan's house" or you could use the address "123 ThatOne Street Somewhere, WA 11111". They both are unique identifiers, but for someone trying to get there one is way more efficient than the other.
Telling someone to go to your friend's house is like using your own id. It does map to a specific house, but the person will still need to find out where that physically is to get there. Using the address is like working with the _rid field. Based on that information alone they can get to the party location. Of course, in the real world the person would probably need directions, but the data storage in a database is a lot more organized than most city streets so an address is sufficient to go retrieve the document.
If you want to take advantage of this method you will need to find a way to work with the _rid field.
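That said, for what it's worth, current versions of the azure.cosmos package expose the point read directly: read_item issues a GET using the user-supplied id plus the partition key, and the service resolves the physical placement (which is what _rid encodes) for you. A sketch, with the account URL, key, and names as placeholders:

from azure.cosmos import CosmosClient

client = CosmosClient('https://myaccount.documents.azure.com:443/', credential='<key>')
container = client.get_database_client('mydb').get_container_client('mycoll')

# A point read (HTTP GET) by id + partition key -- no SQL query involved.
doc = container.read_item(item='my-guid-id', partition_key='my-partition-value')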
I have deployed a small application to Heroku. The slug contains, among other things, a list in a text file. I've set a scheduled job to run, once an hour, a Python script that selects an item from that list and does something with that item.
The trouble is that I don't want to select the same item twice in sequence. So I need to be able to store the last-selected item somewhere. It turns out that Heroku has an ephemeral filesystem (changes are discarded whenever the dyno restarts), so I can't save this information to a temporary or permanent file.
How can I solve this problem? Can I use os.environ in Python to set a configuration variable that stores the last-selected element from the list?
I have to agree with @KlausD: doing what you are suggesting is actually more complex, since you're trying to work with a filesystem that won't keep your changes while tracking state information (the last-selected item) that needs to persist. Even if you were able to store the last item in some environment variable, a restart of the server would lose that information (and a change to os.environ only lives inside the current process anyway).
Adding a database and connecting it to Python would literally take minutes on Heroku. There are plenty of well-documented libraries and ORMs available to create a simple model for storing your list and your cursor. I normally recommend against storing pointers to information, preferring to make the correct item obvious from the architecture, but that may not be possible in your case.
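For example, with the Heroku Postgres add-on and SQLAlchemy, persisting the cursor could look roughly like this (the table and key names are made up):

import os
import random
from sqlalchemy import create_engine, text

# Heroku sets DATABASE_URL with the legacy 'postgres://' scheme;
# SQLAlchemy 1.4+ only accepts 'postgresql://'.
url = os.environ['DATABASE_URL'].replace('postgres://', 'postgresql://', 1)
engine = create_engine(url)

def pick_item(items):
    with engine.begin() as conn:  # one transaction per scheduled run
        conn.execute(text(
            'CREATE TABLE IF NOT EXISTS app_state (key TEXT PRIMARY KEY, value TEXT)'))
        row = conn.execute(text(
            "SELECT value FROM app_state WHERE key = 'last_item'")).fetchone()
        last = row[0] if row else None
        # Never pick the same item twice in a row.
        choice = random.choice([i for i in items if i != last])
        conn.execute(text(
            "INSERT INTO app_state (key, value) VALUES ('last_item', :v) "
            'ON CONFLICT (key) DO UPDATE SET value = :v'), {'v': choice})
    return choice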
This is stated in the Google Cloud Storage Naming Best Practices documentation.
Don't use user IDs, email addresses, project names, project numbers, or any personally identifiable information (PII) in bucket or object names because anyone can probe for the existence of a bucket or object, and use the 403 Forbidden, 404 Not Found and 409 Conflict errors to determine the bucket or object's name. Also, URLs often end up in caches, browser history, proxy logs, shortcuts, and other locations that allow the name to be read easily.
This sort of puts a strain on where I was headed with my application, and how it is structured. I really want to avoid handling/storing Cloud Storage paths via CloudSQL or DataStore.
I'm writing this in Python on Google App Engine, and a good amount of my GCS code is currently based on the username. For example, a user always uploads his/her files into the folder (username) under which he/she registered. A lot of the path logic I currently have utilizes the User variable for GCS.
Could someone recommend a way to follow these guidelines while still being able to use a single variable to reference a user's directory? By that I mean without naming the folder after the user's ID. I would need to be able to reference this variable without accessing SQL or Datastore at any given time.
Any help would be greatly appreciated!
Usernames and filenames can have PII. For example: JeffreyRennieHasWarts.pdf. So they all must be hidden.
One method is to encrypt the object names. The good news is that Google just announced a Key Management Service that makes this a lot easier. See:
https://cloud.google.com/kms/
Another method, as jterrace mentioned, is to salt and hash the username to create a user key. Using Python's hmac module (note that the secret salt is the key and the username is the message, both as bytes), it would look something like:
user_key = hmac.new(my_secret_salt, username.encode(), hashlib.sha256).hexdigest()
But that still leaves the problem of file names. To hide the original file name, you'd give the objects meaningless names, and store a separate object whose contents are the original name of the file. So your object names might look like
userkey1/GUID1.contents
userkey1/GUID1.name
userkey1/GUID2.contents
userkey1/GUID2.name
userkey2/GUID3.contents
userkey2/GUID3.name
The best choice will depend on how you plan to query the data stored in cloud storage.
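To make that layout concrete, here is a sketch using the google-cloud-storage client library (purely illustrative: the bucket name, salt, and helper are made up, and the App Engine GCS library would follow the same naming scheme):

import hashlib
import hmac
import uuid
from google.cloud import storage

def store_user_file(bucket, username, filename, data, secret_salt):
    # Hide the username behind a salted HMAC so it can't be probed for.
    user_key = hmac.new(secret_salt, username.encode(), hashlib.sha256).hexdigest()
    guid = uuid.uuid4().hex
    # Store the contents under a meaningless GUID...
    bucket.blob('%s/%s.contents' % (user_key, guid)).upload_from_string(data)
    # ...and keep the original (possibly PII-bearing) filename in a sibling object.
    bucket.blob('%s/%s.name' % (user_key, guid)).upload_from_string(filename)

bucket = storage.Client().bucket('my-app-bucket')  # placeholder bucket name
store_user_file(bucket, 'someuser', 'report.pdf', b'...', b'my-secret-salt')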
I have a list of users from an external source and a remote machine. I want to take the list from the external source and compare it against my current machine's user list and create users for each user that does not exist on the machine.
I have tried this using Ansible's Runner API (pseudo-code below):
for user in users:
    updateUsers(user)
which will call an Ansible.runner object and make the following call:
ansible.runner.Runner(
    pattern='tools',
    forks=10,
    module_name='user',
    module_args="",
    complex_args=OrderedDict(sorted(dict(name=name, group=group, state=state).items())),
    sudo=True,
).run()
For now group and state are defined globally.
My issue is that as this traverses the for-loop, it does indeed create users as I have specified; the main issue is that after it creates the users, the permissions on each user's home directory do not allow that user access to it. So say "joeshmo" was a user: he would not be able to write to his own ~/ dir.
I am looking for some guidance on how I'm doing this.
Is there a way with playbook to dynamically iterate through a file and grab different user names to add them as users to the system without the permission errors?
Is there a way to fix my current script to not have these errors?
Thank you
If you really need to use code to get your users list, you can write your own iterator in Python and put it in a lookup_plugins dir next to your playbook. Then you can do this:
# Use my custom users.py lookup plugin
- user: name={{ item.name }} group={{ item.group }} state=present
  with_users:
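Under the 1.x-era lookup plugin API (which matches the Runner code above), the plugin itself is just a class exposing run(); whatever it returns becomes the items the with_users: loop iterates over. A rough sketch, with the source file path and hard-coded group as placeholders:

# lookup_plugins/users.py -- minimal sketch (Ansible 1.x lookup plugin API)
class LookupModule(object):
    def __init__(self, basedir=None, **kwargs):
        self.basedir = basedir

    def run(self, terms, inject=None, **kwargs):
        # Read one username per line from the external source (placeholder path).
        users = []
        with open('/path/to/external_users.txt') as f:
            for line in f:
                name = line.strip()
                if name:
                    users.append({'name': name, 'group': 'tools'})
        return users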
Here's how I'd do it.
Create a custom facts script to get all your user/group info from the machine during fact gathering. This file can be any executable that outputs valid JSON (see the sketch below).
Compare the list of fact-gathered users/groups with your external source (not sure what format you have it stored in) and add/delete users appropriately.
I know there is a bit of overhead in creating, deploying, and managing an additional script for custom facts, but I think it's worth it. I've attempted to gather all sorts of information using just a playbook, and it can get really ugly: you'll end up writing all sorts of filter plugins, cursing the set_fact module, and winding up with a 200-line playbook.
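A custom facts script along those lines might look like the sketch below. Per Ansible's local-facts convention it lives in /etc/ansible/facts.d/ with a .fact extension, and its JSON output shows up under ansible_local after fact gathering (the exact fields here are up to you):

#!/usr/bin/env python
# /etc/ansible/facts.d/users.fact -- emits local users/groups as JSON
import grp
import json
import pwd

users = [{'name': p.pw_name,
          'uid': p.pw_uid,
          'home': p.pw_dir,
          'group': grp.getgrgid(p.pw_gid).gr_name}
         for p in pwd.getpwall()]
print(json.dumps({'users': users}))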