I'm using Python 2.7 and Zeep to call SuiteTalk v2017_2_0, the SOAP-based NetSuite web service API. The operation I'm calling is search, like so:
from zeep import Client

netsuite = Client(WSDL)
TransactionSearchAdvanced = netsuite.get_type(
    'ns19:TransactionSearchAdvanced')
TransactionSearchRow = netsuite.get_type('ns19:TransactionSearchRow')

# login removed for brevity

r = netsuite.service.search(TransactionSearchAdvanced(
    savedSearchId=search, columns=TransactionSearchRow()))
Now, the results include all the data I want, but I can't figure out how (if at all) I can determine the display columns that the website would show for this saved search, or the order they appear in.
I figure I could probably call netsuite.service.get() and pass the internalId of the saved search, but what type do I specify? Along those lines, has anyone found a decent reference for all the objects, type enumerations, etc.?
https://stackoverflow.com/a/50257412/1807800
Check out the link above regarding Search Preferences. It explains how to limit the columns returned to only those defined in the search.
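In practice that means sending a SearchPreferences SOAP header with returnSearchColumns enabled, so the response only carries the columns the saved search defines. A hedged sketch with zeep; the 'ns4' prefix is an assumption (zeep assigns prefixes per WSDL, so look up the one mapped to the platform core namespace), while the field names come from the SuiteTalk SearchPreferences type:

# The namespace prefix is a guess; find the prefix your WSDL maps to
# urn:core_2017_2.platform.webservices.netsuite.com
SearchPreferences = netsuite.get_type('ns4:SearchPreferences')
prefs = SearchPreferences(
    bodyFieldsOnly=True,
    returnSearchColumns=True,   # only return the saved search's columns
    pageSize=1000,
)

# Pass the header on the search call; depending on how the WSDL declares the
# header, zeep also accepts a list of header objects instead of a dict here.
r = netsuite.service.search(
    TransactionSearchAdvanced(savedSearchId=search, columns=TransactionSearchRow()),
    _soapheaders={'searchPreferences': prefs},
)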
I'm able to upload a file to SharePoint using the office365 package, but in SharePoint it is not checked in and I had to do that manually. Is there a function in that package to do the check-in?
Here's my code:
def csv_to_sp(spfile_path, output):
    ctx = Extract.connect()
    target_folder = ctx.web.get_folder_by_server_relative_url(spfile_path)
    with open(output, 'rb') as content_file:
        file_content = content_file.read()
    target_folder.upload_file(output, file_content).execute_query()
    response = 'Uploaded to sharepoint!'
    return response
It sounds like the target library has one of the following going on which would cause the uploaded document to be checked out after upload:
Require Checkout is enabled. When this setting is enabled, new documents you upload are checked out to you by default. You can toggle this option from the Library Settings/Versioning Settings page as described here.
There are required fields without default values. When additional metadata fields are added to the library and marked as required, there is an option to specify a default value. If no default value is specified, any new item uploaded to that library will be checked out and visible only to the user who uploaded it. These items will not become visible to anyone else (including admins) until the user does a manual check-in, at which point the user is forced to provide a value for the required field.
Since you did not mention having to specify values for fields, I suspect you have #1 going on in your library.
Try turning off the "Require Check Out" setting and retest.
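If you would rather leave the library settings alone and check the file in from Python, the office365 package also exposes a check-in call on the uploaded File object. A rough sketch under that assumption (method names and signatures vary between versions of Office365-REST-Python-Client, so treat it as a starting point):

def csv_to_sp(spfile_path, output):
    ctx = Extract.connect()   # same connection helper as in the question
    target_folder = ctx.web.get_folder_by_server_relative_url(spfile_path)
    with open(output, 'rb') as content_file:
        file_content = content_file.read()
    # keep a handle on the File object created by the upload
    uploaded_file = target_folder.upload_file(output, file_content)
    ctx.execute_query()
    # check the file in; 1 is a major check-in in the SharePoint REST API
    uploaded_file.checkin('Uploaded from Python', 1)
    ctx.execute_query()
    return 'Uploaded and checked in!'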
I'm using Microsoft Azure, and more specifically Active Directory. I'm currently using Python and the Graph REST API to create users. I would also like to add them to a mailing list in Exchange, but after searching through the docs for hours and looking at various SO posts, it seems that's not supported. So I'm now looking at different Python libraries that might achieve this, and I haven't really found an answer.
Could someone please shed some light on this and perhaps show an example? All I'd like to do is programmatically add a member to a mailing list using Python. Thank you in advance.
It seems that there is not an official MS Graph SDK for Python.
You can refer to the Python sample here.
For this application, you will use the Requests-OAuthlib library to make calls to Microsoft Graph.
This sample retrieves the events in O365.
def get_calendar_events(token):
    graph_client = OAuth2Session(token=token)
    # Configure query parameters to modify the results
    query_params = {
        '$select': 'subject,organizer,start,end',
        '$orderby': 'createdDateTime DESC'
    }
    # Send GET to /me/events
    events = graph_client.get('{0}/me/events'.format(graph_url), params=query_params)
    # Return the JSON result
    return events.json()
What you need to modify is the Graph request: call Add a member to a group instead.
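Sticking with the Requests-OAuthlib pattern from that sample, "Add a member to a group" is a POST to /groups/{group-id}/members/$ref with a reference to the user. A hedged sketch: group_id and user_id are placeholders, graph_url is the same base URL variable the sample defines (https://graph.microsoft.com/v1.0), and the app registration needs a group write permission such as GroupMember.ReadWrite.All consented:

def add_member_to_group(token, group_id, user_id):
    graph_client = OAuth2Session(token=token)
    # The body is a reference (@odata.id) to the directory object to add
    body = {
        '@odata.id': '{0}/directoryObjects/{1}'.format(graph_url, user_id)
    }
    # Send POST to /groups/{id}/members/$ref
    response = graph_client.post(
        '{0}/groups/{1}/members/$ref'.format(graph_url, group_id),
        json=body)
    # Graph returns 204 No Content when the member was added
    return response.status_code == 204

Note that this works for Microsoft 365 groups and security groups; classic Exchange distribution lists still cannot be modified through Graph, which matches what you found in the docs.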
I'm currently trying to retrieve data from the Google Trends site directly into Python. By inspecting the "download csv" button I was able to extract the underlying URL of the file, which looks like this:
https://trends.google.com/trends/api/widgetdata/multiline/csv?req=%7B%22time%22%3A%222018-08-30%202019-08-30%22%2C%22resolution%22%3A%22WEEK%22%2C%22locale%22%3A%22de%22%2C%22comparisonItem%22%3A%5B%7B%22geo%22%3A%7B%7D%2C%22complexKeywordsRestriction%22%3A%7B%22keyword%22%3A%5B%7B%22type%22%3A%22BROAD%22%2C%22value%22%3A%22trump%22%7D%5D%7D%7D%5D%2C%22requestOptions%22%3A%7B%22property%22%3A%22%22%2C%22backend%22%3A%22IZG%22%2C%22category%22%3A0%7D%7D&token=APP6_UEAAAAAXWrEbGVLW-ssfeQvOJgr9938DRgYO1sm&tz=-120
Unquoted:
https://trends.google.com/trends/api/widgetdata/multiline/csv?req={"time":"2018-08-30 2019-08-30","resolution":"WEEK","locale":"de","comparisonItem":[{"geo":{},"complexKeywordsRestriction":{"keyword":[{"type":"BROAD","value":"trump"}]}}],"requestOptions":{"property":"","backend":"IZG","category":0}}&token=APP6_UEAAAAAXWrEbGVLW-ssfeQvOJgr9938DRgYO1sm&tz=-120
I can now easily load this CSV into a pandas DataFrame. Ideally, I would just manipulate the URL to make custom requests and load new data. The problem is that I cannot reuse the same token parameter, since it is somehow generated anew for each individual CSV request. I think the answer from shaochuancs in Origin of tokens in Google trends API call describes the problem I'm facing. Can anyone explain how I can request this token, which I could then use for the second request (the actual CSV download)?
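For what it's worth, the third-party pytrends library seems to handle this explore/token handshake internally, so as a workaround I can get the same data without touching the token myself (a sketch, assuming pytrends is installed and its interface hasn't changed):

# pip install pytrends
from pytrends.request import TrendReq

pytrends = TrendReq(hl='de', tz=-120)   # locale and timezone matching the URL above
pytrends.build_payload(['trump'], timeframe='2018-08-30 2019-08-30')
df = pytrends.interest_over_time()      # pandas DataFrame with weekly values
print(df.head())

That said, I'd still like to understand how the token itself is issued.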
I have a Google sheet that grabs data from a website using the =IMPORTXML function. I also have a Python script that grabs data from the Google sheet. The whole thing is working, but I'm now trying to streamline it. This whole thing started as a manual process in Google Sheets. It's now automated, but it's not pretty.
Two specific questions:
1) What is the best way to scrape a website using Python? I'd like to get this all running in a single script. Would something like Beautiful Soup be a good solution?
2) Currently the query to the Google API is coded to run separately for each query (it's not a sub-function, but I'd like to turn it into one). It essentially copies the quickstart script:
spreadsheetId = 'xxxx'
rangeName = 'xxxx'
result = service.spreadsheets().values().get(
    spreadsheetId=spreadsheetId, range=rangeName).execute()
values = result.get('values', [])

variable = ''
for row in values:
    variable = '%s' % (row[0])

if variable != storedVariable:
    print('Condition not met...')
    return
# Do a thing
My code has various versions of setting a variable, checking it against a stored value, and proceeding if the right conditions exist. Is there an easier way to parse the values returned from the API call so they can be assigned to a variable?
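Roughly, what I'm hoping for is a helper along these lines (just a sketch reusing the names above, with service, spreadsheetId, rangeName and storedVariable defined as before):

def get_cell_value(service, spreadsheet_id, range_name):
    """Return the first column of the last row in the range, as the loop above does."""
    result = service.spreadsheets().values().get(
        spreadsheetId=spreadsheet_id, range=range_name).execute()
    values = result.get('values', [])
    variable = ''
    for row in values:
        variable = '%s' % (row[0])
    return variable

variable = get_cell_value(service, spreadsheetId, rangeName)
if variable != storedVariable:
    print('Condition not met...')
# otherwise do a thing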
BeautifulSoup will work well for scraping data as long as the page is completely static. For most web pages you'll need to be able to interact with the page to get to the data you need, or iterate through multiple pages; Selenium is great for those situations.
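For a fully static page, a minimal requests + BeautifulSoup sketch looks roughly like this (the URL and the CSS selector are placeholders for whatever you're scraping):

# pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

response = requests.get('https://example.com/page-to-scrape')   # placeholder URL
response.raise_for_status()
soup = BeautifulSoup(response.text, 'html.parser')

# Example: collect the text of every cell in the page's tables
values = [cell.get_text(strip=True) for cell in soup.select('table td')]
print(values)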
I don't have a better solution for this question. The google-api-python-client library is cumbersome. It looks like gspread used to be a good alternative with more features, but it hasn't been updated for almost a year and seems to have fallen behind the google library.
I wonder if it's possible to create and delete events with the Google API in a secondary calendar. I know well how to do it in the main calendar, so I'm only asking how to change calendar_service to read from and write to another calendar.
I've tried logging in with the secondary calendar's email, but that's not possible and fails with a BadAuthentication error. The URL was surely correct, because it was read by the API.
Waiting for your help.
I'm answering my own question so I can finally accept an answer. The problem was solved some time ago.
The most important answer is in this documentation.
Each query can be run with a uri as an argument, for example InsertEvent(event, uri). The uri can be set manually (from the Google Calendar settings) or automatically, as written in the post below. Note that CalendarEventQuery takes only the username, not the whole URL.
The construction of both goes this way:
user = "abcd1234#group.calendar.google.com"
uri = "http://www.google.com/calendar/feeds/{{ user }}/private/full-noattendees"
What's useful is that you can run queries with different uris and add/delete events in many different calendars within one script.
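For example, to insert an event straight into the secondary calendar you can pass its full feed uri as the second argument (a sketch based on the old gdata client samples; for inserts I use the plain /private/full projection rather than full-noattendees, and the dates are placeholders):

import atom
import gdata.calendar
import gdata.calendar.service

cal_client = gdata.calendar.service.CalendarService()
# ... usual ProgrammaticLogin / auth here ...

user = "abcd1234@group.calendar.google.com"
insert_uri = "http://www.google.com/calendar/feeds/%s/private/full" % user

event = gdata.calendar.CalendarEventEntry()
event.title = atom.Title(text='Event in the secondary calendar')
event.when.append(gdata.calendar.When(start_time='2011-06-01T10:00:00.000Z',
                                      end_time='2011-06-01T11:00:00.000Z'))

# InsertEvent takes the target feed uri, so this lands in the secondary calendar
created = cal_client.InsertEvent(event, insert_uri)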
Hope someone finds it helpful.
I got the same issue, but I found this solution (I do not remember where).
The idea is to extract the secondary calendar's user from the src URL provided by Google.
This is probably not the best solution, but it is a working one.
Note: the code is extracted from a real project (some parts have been removed) and must be adapted to your particular case; it is provided as a sample to support the explanation (it will not work as is).
# 1 - Connect using the main user email address in the classical way
cal_client = gdata.calendar.service.CalendarService()
# insert the usual connection/auth stuff here

# 2 - For each existing calendar
feed = cal_client.GetAllCalendarsFeed()

# a loop to find the calendar by its title (cal_title)
for a_calendar in feed.entry:
    if cal_title in a_calendar.title.text:
        cal_user = a_calendar.content.src.split('/')[5].replace('%40', '@')
        # If you print a_calendar.content.src.split('/') you will see that the url
        # contains an email-like id. This is the one to use to work
        # with the calendar
Then you just have to replace the default user with cal_user in the API calls to work on the secondary calendar.
The replace call is required because the Google API functions do internal conversion of special characters like '%'.
I hope this will help you.
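For instance, once cal_user has been extracted, the query classes take it in place of 'default' (a sketch; adapt the date range and auth to your case):

import gdata.calendar.service

# cal_user extracted as above, e.g. "abcd1234@group.calendar.google.com"
query = gdata.calendar.service.CalendarEventQuery(cal_user, 'private', 'full')
query.start_min = '2011-06-01'
query.start_max = '2011-06-30'

feed = cal_client.CalendarQuery(query)
for an_event in feed.entry:
    print(an_event.title.text)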