Query an InfluxDB database - Python

I have this code to query an InfluxDB database, but it is not working at all. Here is the Python code:
import os
from influxdb import InfluxDBClient

username = u'{}'.format(os.environ['INFLUXDB_USERNAME'])
password = u'{}'.format(os.environ['INFLUXDB_PASSWORD'])

client = InfluxDBClient(host='127.0.0.1', port=8086, database='data',
                        username=username, password=password)
result = client.query("SELECT P_askbid_midprice1 FROM 'DCIX_OB' WHERE time > '2018-01-01';")
I get the following error, but it's still unclear how to fix the code above. If I query InfluxDB directly from bash with SELECT P_askbid_midprice1 FROM 'DCIX_OB' WHERE time > '2018-01-01'; it works perfectly fine.
Traceback (most recent call last):
  File "graph_influxdb.py", line 11, in <module>
    result = client.query("SELECT P_askbid_midprice1 FROM 'DCIX_OB' WHERE time > '2018-01-01';")
  File "/home/ubuntu/.local/lib/python3.5/site-packages/influxdb/client.py", line 394, in query
    expected_response_code=expected_response_code
  File "/home/ubuntu/.local/lib/python3.5/site-packages/influxdb/client.py", line 271, in request
    raise InfluxDBClientError(response.content, response.status_code)
influxdb.exceptions.InfluxDBClientError: 400: {"error":"error parsing query: found DCIX_OB, expected identifier at line 1, char 31"}
How can I fix it?

In InfluxQL, single quotes create string literals, so 'DCIX_OB' is parsed as a string where the parser expects an identifier (hence "found DCIX_OB, expected identifier"). Leave the measurement name unquoted, or wrap it in double quotes:

result = client.query("SELECT P_askbid_midprice1 FROM DCIX_OB WHERE time > '2018-01-01'")

This should work.
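For completeness, here is a minimal sketch of running the corrected query and iterating over the result with the influxdb client's ResultSet.get_points(); the host, port, database, and field names are the ones from the question:

import os
from influxdb import InfluxDBClient

client = InfluxDBClient(host='127.0.0.1', port=8086, database='data',
                        username=os.environ['INFLUXDB_USERNAME'],
                        password=os.environ['INFLUXDB_PASSWORD'])

# Measurement name unquoted; the time bound stays a single-quoted string literal.
result = client.query("SELECT P_askbid_midprice1 FROM DCIX_OB WHERE time > '2018-01-01'")

# ResultSet.get_points() yields one dict per returned row.
for point in result.get_points(measurement='DCIX_OB'):
    print(point['time'], point['P_askbid_midprice1'])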

You might be able to use Pinform, an ORM/OSTM (object time-series mapping) layer for InfluxDB.
It can help with designing the schema and building plain or aggregation queries:
cli.get_fields_as_series(OHLC,
                         field_aggregations={'close': [AggregationMode.MEAN]},
                         tags={'symbol': 'AAPL'},
                         time_range=(start_datetime, end_datetime),
                         group_by_time_interval='10d')
Disclaimer: I am the author of this library

Snscrape sntwitter Error while using query and TwitterSearchScraper: "4 requests to ... failed giving up"

Up until this morning I successfully used snscrape to scrape Twitter tweets via Python.
The code looks like this:
import snscrape.modules.twitter as sntwitter

query = "from:annewilltalk"
for i, tweet in enumerate(sntwitter.TwitterSearchScraper(query).get_items()):
    # do stuff
But this morning without any changes I got the error:
Traceback (most recent call last):
  File "twitter.py", line 568, in <module>
    Scraper = TwitterScraper()
  File "twitter.py", line 66, in __init__
    self.get_tweets_talkshow(username = "annewilltalk")
  File "twitter.py", line 271, in get_tweets_talkshow
    for i,tweet in enumerate(sntwitter.TwitterSearchScraper(query).get_items()):
  File "/home/pi/.local/lib/python3.8/site-packages/snscrape/modules/twitter.py", line 1455, in get_items
    for obj in self._iter_api_data('https://api.twitter.com/2/search/adaptive.json', _TwitterAPIType.V2, params, paginationParams, cursor = self._cursor):
  File "/home/pi/.local/lib/python3.8/site-packages/snscrape/modules/twitter.py", line 721, in _iter_api_data
    obj = self._get_api_data(endpoint, apiType, reqParams)
  File "/home/pi/.local/lib/python3.8/site-packages/snscrape/modules/twitter.py", line 691, in _get_api_data
    r = self._get(endpoint, params = params, headers = self._apiHeaders, responseOkCallback = self._check_api_response)
  File "/home/pi/.local/lib/python3.8/site-packages/snscrape/base.py", line 221, in _get
    return self._request('GET', *args, **kwargs)
  File "/home/pi/.local/lib/python3.8/site-packages/snscrape/base.py", line 217, in _request
    raise ScraperException(msg)
snscrape.base.ScraperException: 4 requests to https://api.twitter.com/2/search/adaptive.json?include_profile_interstitial_type=1&include_blocking=1&include_blocked_by=1&include_followed_by=1&include_want_retweets=1&include_mute_edge=1&include_can_dm=1&include_can_media_tag=1&include_ext_has_nft_avatar=1&skip_status=1&cards_platform=Web-12&include_cards=1&include_ext_alt_text=true&include_quote_count=true&include_reply_count=1&tweet_mode=extended&include_entities=true&include_user_entities=true&include_ext_media_color=true&include_ext_media_availability=true&include_ext_sensitive_media_warning=true&include_ext_trusted_friends_metadata=true&send_error_codes=true&simple_quoted_tweet=true&q=from%3Aannewilltalk&tweet_search_mode=live&count=20&query_source=spelling_expansion_revert_click&pc=1&spelling_corrections=1&ext=mediaStats%2ChighlightedLabel%2ChasNftAvatar%2CvoiceInfo%2Cenrichments%2CsuperFollowMetadata%2CunmentionInfo failed, giving up.
I found online that the URL encoding must not exceed a length of 500, and the one from the error message is about 800 characters long. Could that be the problem? Why did that change overnight?
How can I fix that?
Same problem here. It was working just fine and suddenly stopped; I get the same error code.
This is the code I used:
import snscrape.modules.twitter as sntwitter
import pandas as pd

# Creating list to append tweet data to
tweets_list2 = []

# Using TwitterSearchScraper to scrape data and append tweets to list
for i, tweet in enumerate(sntwitter.TwitterSearchScraper('xxxx since:2022-06-01 until:2022-06-30').get_items()):
    if i > 100000:
        break
    tweets_list2.append([tweet.date, tweet.id, tweet.content, tweet.user.username])

# Creating a dataframe from the tweets list above
tweets_df2 = pd.DataFrame(tweets_list2, columns=['Datetime', 'Tweet Id', 'Text', 'Username'])
print(tweets_df2)
tweets_df2.to_csv('xxxxx.csv')
Twitter changed something that broke the snscrape requests several days back. It looks like they just reverted back to the configuration that worked about 15 minutes ago, though. If you are using the latest snscrape version, everything should be working for you now!
Here is a thread with more info:
https://github.com/JustAnotherArchivist/snscrape/issues/671
There was a lot of discussion in the snscrape GitHub issues, but none of the answers worked for me.
I now have a workaround using the shell client: executing the queries via the terminal works for some reason.
So I execute the shell command via subprocess, save the result in a JSON file, and load that JSON back into Python.
Example shell command: snscrape --jsonl --max-results 10 twitter-search "from:maybritillner" > test_twitter.json
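A minimal sketch of that workaround, assuming the snscrape CLI is on the PATH; the query and output file name are the ones from the example command above:

import json
import subprocess

# Run the snscrape CLI; --jsonl writes one JSON object per line.
subprocess.run(
    'snscrape --jsonl --max-results 10 twitter-search "from:maybritillner" > test_twitter.json',
    shell=True, check=True)

# Load the JSON-lines output back into Python.
tweets = []
with open('test_twitter.json') as f:
    for line in f:
        tweets.append(json.loads(line))
print(len(tweets), 'tweets loaded')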

Insert or update a Salesforce record with Python

I want to insert or update a Salesforce record from Python, but I get the following error.
The value of "SeqNo__c" is "202102-123"; with it I specify the record I want to change, but it seems the record cannot be found via SeqNo__c.
If anyone knows, please let me know.
This is the code:
from simple_salesforce import Salesforce
from datetime import datetime
import os
import json
import gzip
from pytz import timezone
import requests

#strDate = datetime.now().strftime("%Y%m%d")
#strTime = datetime.now().strftime("%H%M%S")

SALESFORCE_USERNAME = '123@gmail.com'
PASSWORD = '12345'
SECURITY_TOKEN = '12345'

# Authentication settings
sf = Salesforce(username=SALESFORCE_USERNAME,
                password=PASSWORD,
                security_token=SECURITY_TOKEN)

# Tried bulk as well, but it also errored
#data = [{'SeqNo__c': '202102-123', 'NewNO__c': '1'}]
#sf.bulk.Contact.update(data)

sf.Contact.upsert('SeqNo__c/202102-123', {'NewNO__c': '1'})
The error:
Traceback (most recent call last):
  File "c:\Users\test\Documents\salesforce\sf_check.py", line 27, in <module>
    sf.Contact.upsert('SeqNo__c/202102-123',{'NewNO__c': '1'})
  File "C:\Users\test\AppData\Local\Programs\Python\Python39\lib\site-packages\simple_salesforce\api.py", line 694, in upsert
    result = self._call_salesforce(
  File "C:\Users\test\AppData\Local\Programs\Python\Python39\lib\site-packages\simple_salesforce\api.py", line 800, in _call_salesforce
    exception_handler(result, self.name)
  File "C:\Users\test\AppData\Local\Programs\Python\Python39\lib\site-packages\simple_salesforce\util.py", line 68, in exception_handler
    raise exc_cls(result.url, result.status_code, name, response_content)
simple_salesforce.exceptions.SalesforceResourceNotFound: Resource Contact Not Found. Response content: [{'errorCode': 'NOT_FOUND', 'message': 'Provided external ID field does not exist or is not accessible: SeqNo__c'}]
Is the Contact.SeqNo__c field marked as an external ID? There's a checkbox in the field definition that must be ticked, and your user needs a profile permission to at least see the field.
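If you want to verify this from Python, the describe metadata exposes that flag; a minimal sketch, assuming the same authenticated sf client as above:

# Inspect Contact field metadata and check the external-ID flag on SeqNo__c.
meta = sf.Contact.describe()
for field in meta['fields']:
    if field['name'] == 'SeqNo__c':
        print('externalId:', field['externalId'])

If SeqNo__c does not appear in the field list at all, your user's profile most likely cannot see it.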

Running simple pdblp code to extract BBG data

I am currently logged in to my BBG Anywhere (web login) on my Mac. So the first question is whether I would still be able to extract data using tia, as I am not actually on my terminal.
import pdblp
con = pdblp.BCon(debug=True, port=8194, timeout=5000)
con.start()
I got this error
pdblp.pdblp:WARNING:Message Received:
SessionStartupFailure = {
    reason = {
        source = "Session"
        category = "IO_ERROR"
        errorCode = 9
        description = "Connection failed"
    }
}
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "/Users/prasadkamath/anaconda2/envs/Pk36/lib/python3.6/site-packages/pdblp/pdblp.py", line 147, in start
    raise ConnectionError('Could not start blpapi.Session')
ConnectionError: Could not start blpapi.Session
I am assuming that I need to be on the terminal to be able to extract data, but wanted to confirm that.
This is a duplicate of this issue here on SO. It is not an issue with pdblp per se, but with blpapi not finding a connection. You mention that you are logged in via the web, which only allows you to use the terminal (or Excel add-in) within the browser, but not outside of it, since this way of accessing Bloomberg lacks a data feed and an API. More details and alternatives can be found here.
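To make the failure mode explicit rather than an unhandled traceback, here is a minimal sketch wrapping the session start; the BCon parameters are the ones from the question:

import pdblp

con = pdblp.BCon(debug=True, port=8194, timeout=5000)
try:
    con.start()
except ConnectionError:
    # blpapi found no local data feed on port 8194; a web-only
    # (BBG Anywhere) login does not provide one, so a desktop
    # Terminal session is required.
    print('Could not start a blpapi session; is a local Terminal running?')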

Can't find SalesForce Object

I'm having some trouble with Salesforce; I've never used it before, so I'm not entirely sure what is going wrong here. I am using the simple_salesforce Python module. I have successfully pulled data from Salesforce standard objects, but this custom object is giving me trouble. My query is:
result = sf.query("Select Name from Call_Records__c")
which produces this error:
Traceback (most recent call last):
  File "simple.py", line 15, in <module>
    result = sf.query("Select Name from Call_Records__c")
  File "/usr/local/lib/python2.7/dist-packages/simple_salesforce/api.py", line 276, in query
    _exception_handler(result)
  File "/usr/local/lib/python2.7/dist-packages/simple_salesforce/api.py", line 634, in _exception_handler
    raise exc_cls(result.url, result.status_code, name, response_content)
simple_salesforce.api.SalesforceMalformedRequest: Malformed request https://sandbox.company.com/services/data/v29.0/query/?q=Select+Name+from+Call_Records__c. Response content: [{u'errorCode': u'INVALID_TYPE', u'message': u"\nSelect Name from Call_Records__c\n ^\nERROR at Row:1:Column:18\nsObject type 'Call_Records__c' is not supported. If you are attempting to use a custom object, be sure to append the '__c' after the entity name. Please reference your WSDL or the describe call for the appropriate names."}]
I've tried it with and without the __c for both the table name and the field name, still can't figure this out. Anything blatantly wrong?
Make sure you have the object's exact API name; it could be Call_Records__c or CallRecords__c. Depending on which it is:
result = sf.query("Select Name from Call_Records__c")
or
result = sf.query("Select Name from CallRecords__c")

Try using:
result = sf.query("Select Name from CallRecords__c")
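If neither name works, the error message's suggestion to reference "the describe call" can be followed from Python; a minimal sketch, assuming the same authenticated sf client as in the question:

# List every sObject API name visible to this user, filtering for likely matches.
for sobject in sf.describe()['sobjects']:
    if 'call' in sobject['name'].lower():
        print(sobject['name'])

If the custom object does not appear at all, your user may lack permission to it in the sandbox.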

Updating a Spreadsheet in Google Docs

I have a Python script that takes a generated CSV and uploads it to Google Docs. It can upload the file just fine, but I cannot seem to get it to replace the data; it returns an error I cannot find any reference to.
Le Code:
import gdata.auth
import gdata.docs
import gdata.docs.service
import gdata.docs.data
import gdata.docs.client

email = 'admin@domain.com'
CONSUMER_KEY = 'domain.com'
CONSUMER_SECRET = 'blah54545blah'

ms_client = gdata.docs.client.DocsClient('Domain_Doc_Upload')
ms_client.auth_token = gdata.gauth.TwoLeggedOAuthHmacToken(CONSUMER_KEY, CONSUMER_SECRET, email)

url = 'http://docs.google.com/feeds/documents/private/full/sd01blahgarbage'
ms = gdata.data.MediaSource(file_path="C:\\people.csv", content_type='text/csv')
csv_entry2 = ms_client.Update(url, ms)
It returns:
Traceback (most recent call last):
  File "so_test.py", line 19, in <module>
    csv_entry2 = ms_client.Update(ms, url)
  File "build\bdist.win-amd64\egg\gdata\client.py", line 717, in update
AttributeError: 'MediaSource' object has no attribute 'to_string'
I cannot find anything about the 'to_string' attribute, so I am lost on the trace. Any help much appreciated.
I took a look at the docs and it looks like the Update method takes (entry, ms), where entry needs to be a gdata.docs.data.DocsEntry object, not a URL string. You should be able to get the DocsEntry object by fetching a feed from your client:
feed = client.GetDocList()
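A rough sketch of that approach, assuming the same authenticated ms_client as in the question; the document title and the media_source keyword are assumptions about this (old) gdata client, so check them against your gdata version:

# Fetch the document list feed and look up the spreadsheet to replace.
feed = ms_client.GetDocList()
target = None
for entry in feed.entry:
    if entry.title.text == 'people':  # hypothetical document title
        target = entry
        break

# Pass the entry (not the URL) plus the CSV media source to Update;
# the media_source kwarg is an assumption, adjust to your gdata version.
if target is not None:
    ms = gdata.data.MediaSource(file_path="C:\\people.csv", content_type='text/csv')
    updated = ms_client.Update(target, media_source=ms)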
