I'm using the Alpaca trading API and want to export data from a function call into a CSV file.
When I run a call like this:
closed_orders = api.list_orders(
    status='closed',
    limit=2,
    nested=True  # show nested multi-leg orders
)
print(closed_orders)
I get back:
[Order({ 'asset_class': 'us_equity',
'asset_id': '8a9-43b6-9b36-662f01e8fadd',
'canceled_at': None,
'client_order_id': 'e38a-b51c-349314bc6e9e',
'created_at': '2020-06-05T16:16:53.307491Z',
'expired_at': None,
'extended_hours': False,
'failed_at': None,
'filled_at': '2020-06-05T16:16:53.329Z',
'filled_avg_price': '7.8701',
'filled_qty': '45',
'id': '8-4888-9c7c-97bf8c2a3a16',
'legs': None,
'limit_price': '7.87',
'order_class': '',
'order_type': 'limit',
'qty': '45',
'replaced_at': None,
'replaced_by': None,
'replaces': None,
'side': 'sell',
'status': 'filled',
'stop_price': None,
'submitted_at': '2020-06-05T16:16:53.293859Z',
'symbol': 'CARS',
'time_in_force': 'day',
'type': 'limit',
'updated_at': '2020-06-08T11:21:51.411547Z'}), Order({ 'asset_class': 'us_equity',
'asset_id': '1aef-42f4-9975-750dbcb3e67d',
'canceled_at': None,
'client_order_id': '2bde-4572-a5d0-bfc32c2bf31a',
'created_at': '2020-06-05T16:16:37.508176Z',
'expired_at': None,
'extended_hours': False,
'failed_at': None,
'filled_at': '2020-06-05T16:16:37.531Z',
'filled_avg_price': '10.8501',
'filled_qty': '26',
'id': '4256-472c-a5de-6ca9d6a21422',
'legs': None,
'limit_price': '10.85',
'order_class': '',
'order_type': 'limit',
'qty': '26',
'replaced_at': None,
'replaced_by': None,
'replaces': None,
'side': 'sell',
'status': 'filled',
'stop_price': None,
'submitted_at': '2020-06-05T16:16:37.494389Z',
'symbol': 'IGT',
'time_in_force': 'day',
'type': 'limit',
'updated_at': '2020-06-08T11:21:51.424963Z'})]
How do I grab this data and write it to a CSV? I tried something like the code below, but I get the error: 'Order' object has no keys. I assumed I'd be able to loop through the API response and write rows according to the headers; how do I break down/flatten the API response accordingly?
import csv

with open('historical_orders.csv', 'w', newline='') as csvfile:
    fieldnames = [
        'asset_id',
        'canceled_at',
        'client_order_id',
        'created_at',
        'expired_at',
        'extended_hours',
        'failed_at',
        'filled_at',
        'filled_avg_price',
        'filled_qty',
        'id',
        'legs',
        'limit_price',
        'order_class',
        'order_type',
        'qty',
        'replaced_at',
        'replaced_by',
        'replaces',
        'side',
        'status',
        'stop_price',
        'submitted_at',
        'symbol',
        'time_in_force',
        'type',
        'updated_at'
    ]
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    writer.writeheader()
    for c in closed_orders:
        writer.writerow(c)
You can access the __dict__ attribute of each Order object to get the headers and rows for the CSV file:
import csv

with open('historical_orders.csv', 'w', newline='') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=closed_orders[0].__dict__['_raw'].keys())
    writer.writeheader()
    for order in closed_orders:
        writer.writerow(order.__dict__['_raw'])
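As a side note, the Entity base class stores the payload on the instance as _raw (which is why __dict__['_raw'] works), so an equivalent, slightly shorter spelling should be:

for order in closed_orders:
    writer.writerow(order._raw)  # same dict as order.__dict__['_raw']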
I'm using an API from Anomali to gather an intel list, and I want to ask how I can run the code so that it outputs all the needed column headers into a CSV file that Excel can open.
So I created code that pulls out the needed columns returned by the site.
import requests
import json
import pandas as pd
import csv

url = 'https://api.threatstream.com/api/v2/intelligence/?itype=bot_ip'
csv_columns = ['ip', 'source_created', 'status', 'itype', 'expiration_ts', 'is_editable', 'feed_id', 'update_id',
               'value', 'ispublic', 'threat_type', 'workgroups', 'rdns', 'confidence', 'uuid', 'retina_confidence',
               'trusted_circle_ids', 'id', 'source', 'owner_organization_id', 'import_session_id', 'source_modified',
               'type', 'sort', 'description', 'tags', 'threatscore', 'latitude', 'modified_ts', 'org', 'asn',
               'created_ts', 'tlp', 'is_anonymous', 'country', 'source_reported_confidence', 'can_add_public_tags',
               'subtype', 'meta', 'resource_uri']

with open("AnomaliThreat.csv", "a", newline='') as filecsv:
    writer = csv.DictWriter(filecsv, fieldnames=csv_columns)
    writer.writeheader()

headers = {
    'Accept': 'application/json',
    'Authorization': 'apikey testing:wdwfawaf12321rfewawafa'
}
response = requests.get(url=url, headers=headers)
json_Data = json.loads(response.content)
result = json_Data["objects"]

with open("AnomaliThreat.csv", "a", newline='') as filecsv:
    writer = csv.DictWriter(filecsv, fieldnames=csv_columns)
    writer.writerow(result)
If I run this code, all I get is: 'list' object has no attribute 'keys'. My guess is that it's because the response contains lists nested inside the list, or dicts nested inside, for example like this:
'trusted_circle_ids': [1241412, 212141241]
or this
'tags': [{'id': 'fwafwff', 'name': 'wfwafwawf'},
{'id': '31231ewfw',
'name': 'fwafwafwafaw#gmail.com.wafawfawfds.com'}],
And this is what's inside the response from Anomali:
[{'source_created': None,
'status': 'inactive',
'itype': 'bot_ip',
'expiration_ts': '',
'ip': '231.24124.1241.412',
'is_editable': False,
'feed_id': 23112231,
'update_id': 231231,
'value': '124124124141224141',
'is_public': False,
'threat_type': 'bot',
'workgroups': [],
'rdns': None,
'confidence': 12,
'uuid': '3123414124124142',
'retina_confidence': 52414,
'trusted_circle_ids': [1241412, 212141241],
'id': fwaffewaewafw1231231,
'source': 'wfawfwaefwadfwa',
'owner_organization_id': 2,
'import_session_id': None,
'source_modified': None,
'type': 'ip',
'sort': [312312424124141241, '1241414214241'],
'description': None,
'tags': [{'id': 'fwafwff', 'name': 'wfwafwawf'},
{'id': '31231ewfw',
'name': 'fwafwafwafaw#gmail.com.wafawfawfds.com'}],
'threatscore': 412,
'latitude': wafefwaf,
'modified_ts': 'wawafwadfd',
'org': 'fawfwafawe',
'asn': 'fwafwa2131231',
'created_ts': '41241241241241',
'tlp': None,
'is_anonymous': False,
'country': 'fwafw',
'source_reported_confidence': 21,
'can_add_public_tags': False,
'longitude': --321412,
'subtype': None,
'meta': {'detail2': 'bi2141412412342424',
'severity': '3123124r3'},
'resource_uri': '/api/v2/intelligence/241fsdfsf241325/'},
{'source_created': None,
'status': 'inactive',
'itype': 'bot_ip',
'expiration_ts': '',
'ip': '231.24124.1241.412',
'is_editable': False,
'feed_id': 23112231,
'update_id': 231231,
'value': '124124124141224141',
'is_public': False,
'threat_type': 'bot',
'workgroups': [],
'rdns': None,
'confidence': 12,
'uuid': '3123414124124142',
'retina_confidence': 52414,
'trusted_circle_ids': [1241412, 212141241],
'id': fwaffewaewafw1231231,
'source': 'wfawfwaefwadfwa',
'owner_organization_id': 2,
'import_session_id': None,
'source_modified': None,
'type': 'ip',
'sort': [312312424124141241, '1241414214241'],
'description': None,
'tags': [{'id': 'fwafwff', 'name': 'wfwafwawf'},
{'id': '31231ewfw',
'name': 'fwafwafwafaw#gmail.com.wafawfawfds.com'}],
'threatscore': 412,
'latitude': wafefwaf,
'modified_ts': 'wawafwadfd',
'org': 'fawfwafawe',
'asn': 'fwafwa2131231',
'created_ts': '41241241241241',
'tlp': None,
'is_anonymous': False,
'country': 'fwafw',
'source_reported_confidence': 21,
'can_add_public_tags': False,
'longitude': --321412,
'subtype': None,
'meta': {'detail2': 'bi2141412412342424',
'severity': '3123124r3'},
'resource_uri': '/api/v2/intelligence/241fsdfsf241325/'}]
I'm open to any suggestions on how to get the results written into a CSV file that Excel can open.
Problem solved!
I needed to write the rows one by one, so I added this code:
import csv

data_file = open("AnomaliThreat.csv", "w", newline='')
csv_writer = csv.writer(data_file)

count = 0
for res in result:
    if count == 0:
        # write the header once, using the first row's keys
        header = res.keys()
        csv_writer.writerow(header)
        count += 1
    csv_writer.writerow(res.values())

data_file.close()
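For what it's worth, csv.DictWriter can do the same header bookkeeping for you; a minimal sketch, assuming every dict in result has the same keys as the first one:

import csv

with open("AnomaliThreat.csv", "w", newline='') as data_file:
    writer = csv.DictWriter(data_file, fieldnames=result[0].keys())
    writer.writeheader()
    writer.writerows(result)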
You can try doing something like this, if I understood correctly:
import requests
import json
import pandas as pd

url = 'https://api.threatstream.com/api/v2/intelligence/?itype=bot_ip'
csv_columns = ['ip', 'source_created', 'status', 'itype', 'expiration_ts', 'is_editable', 'feed_id', 'update_id',
               'value', 'ispublic', 'threat_type', 'workgroups', 'rdns', 'confidence', 'uuid', 'retina_confidence',
               'trusted_circle_ids', 'id', 'source', 'owner_organization_id', 'import_session_id', 'source_modified',
               'type', 'sort', 'description', 'tags', 'threatscore', 'latitude', 'modified_ts', 'org', 'asn',
               'created_ts', 'tlp', 'is_anonymous', 'country', 'source_reported_confidence', 'can_add_public_tags',
               'subtype', 'meta', 'resource_uri']
headers = {
    'Accept': 'application/json',
    'Authorization': 'apikey testing:wdwfawaf12321rfewawafa'
}
response = requests.get(url=url, headers=headers)
json_Data = json.loads(response.content)
result = json_Data["objects"]

# keep only the key/value pairs whose key is in csv_columns
rows = []
for item in result:
    rows.append({key: value for key, value in item.items() if key in csv_columns})

dataframe_1 = pd.DataFrame(rows, columns=csv_columns)
dataframe_1.to_csv("AnomaliThreat.csv", index=False)
Something along those lines: basically, iterate through the key/value pairs within each entry of result, check whether the key is in csv_columns, keep the matching pairs, and finally, once all that is done, just use DataFrame.to_csv.
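One caveat: some of the values (e.g. 'tags', 'trusted_circle_ids', 'meta') are themselves lists or dicts, and pandas will write their Python repr into the cell. A small sketch of one way to serialize them first, assuming JSON strings are an acceptable cell format:

import json

for row in rows:
    for key, value in row.items():
        if isinstance(value, (list, dict)):
            row[key] = json.dumps(value)  # store nested structures as JSON text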
I have the below code, which creates a CSV file:
import csv

# my data rows as dictionary objects
mydict = [{'branch': 'COE', 'cgpa': '9.0', 'name': 'Nikhil', 'year': '2'},
          {'branch': 'COE', 'cgpa': '9.1', 'name': 'Sanchit', 'year': '2'},
          {'branch': 'IT', 'cgpa': '9.3', 'name': 'Aditya', 'year': '2'},
          {'branch': 'SE', 'cgpa': '9.5', 'name': 'Sagar', 'year': '1'},
          {'branch': 'MCE', 'cgpa': '7.8', 'name': 'Prateek', 'year': '3'},
          {'branch': 'EP', 'cgpa': '9.1', 'name': 'Sahil', 'year': '2'}]

# field names
fields = ['name', 'branch', 'year', 'cgpa']

# name of csv file
filename = "university_records.csv"

# writing to csv file
with open(filename, 'w') as csvfile:
    # creating a csv dict writer object
    writer = csv.DictWriter(csvfile, fieldnames=fields)
    # writing headers (field names)
    writer.writeheader()
    # writing data rows
    writer.writerows(mydict)
Running the above code produces a sheet that contains a blank row between each data row. How can I remove these blank rows? Thanks.
You should create a DataFrame from your dicts, and then just use
dataframe.to_csv(filename, sep=your_column_sep)
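A minimal sketch of that approach, assuming the mydict and fields variables from the question (index=False keeps pandas from writing the row index as an extra first column):

import pandas as pd

df = pd.DataFrame(mydict)
df.to_csv("university_records.csv", index=False, columns=fields)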
Adding the newline='' in the with open(...) call does the trick. The csv module writes its own line endings, so without newline='' the file object translates them again on Windows, adding an extra carriage return that shows up as a blank row after each record:
import csv

my_dict = [{'branch': 'COE', 'cgpa': '9.0', 'name': 'Nikhil', 'year': '2'},
           {'branch': 'COE', 'cgpa': '9.1', 'name': 'Sanchit', 'year': '2'},
           {'branch': 'IT', 'cgpa': '9.3', 'name': 'Aditya', 'year': '2'},
           {'branch': 'SE', 'cgpa': '9.5', 'name': 'Sagar', 'year': '1'},
           {'branch': 'MCE', 'cgpa': '7.8', 'name': 'Prateek', 'year': '3'},
           {'branch': 'EP', 'cgpa': '9.1', 'name': 'Sahil', 'year': '2'}]

fields = ['name', 'branch', 'year', 'cgpa']
filename = "foo_bar.csv"

with open(filename, 'w', newline='') as csv_file:
    writer = csv.DictWriter(csv_file, fieldnames=fields)
    writer.writeheader()
    writer.writerows(my_dict)
Building off my former post here: How to print data from API call into a CSV file
The API call returns this
[Order({ 'asset_class': 'us_equity',
'asset_id': '8a9-43b6-9b36-662f01e8fadd',
'canceled_at': None,
'client_order_id': 'e38a-b51c-349314bc6e9e',
'created_at': '2020-06-05T16:16:53.307491Z',
'expired_at': None,
'extended_hours': False,
'failed_at': None,
'filled_at': '2020-06-05T16:16:53.329Z',
'filled_avg_price': '7.8701',
'filled_qty': '45',
'id': '8-4888-9c7c-97bf8c2a3a16',
'legs': None,
'limit_price': '7.87',
'order_class': '',
'order_type': 'limit',
'qty': '45',
'replaced_at': None,
'replaced_by': None,
'replaces': None,
'side': 'sell',
'status': 'filled',
'stop_price': None,
'submitted_at': '2020-06-05T16:16:53.293859Z',
'symbol': 'CARS',
'time_in_force': 'day',
'type': 'limit',
'updated_at': '2020-06-08T11:21:51.411547Z'}), Order({ 'asset_class': 'us_equity',
'asset_id': '1aef-42f4-9975-750dbcb3e67d',
'canceled_at': None,
'client_order_id': '2bde-4572-a5d0-bfc32c2bf31a',
'created_at': '2020-06-05T16:16:37.508176Z',
'expired_at': None,
'extended_hours': False,
'failed_at': None,
'filled_at': '2020-06-05T16:16:37.531Z',
'filled_avg_price': '10.8501',
'filled_qty': '26',
'id': '4256-472c-a5de-6ca9d6a21422',
'legs': None,
'limit_price': '10.85',
'order_class': '',
'order_type': 'limit',
'qty': '26',
'replaced_at': None,
'replaced_by': None,
'replaces': None,
'side': 'sell',
'status': 'filled',
'stop_price': None,
'submitted_at': '2020-06-05T16:16:37.494389Z',
'symbol': 'IGT',
'time_in_force': 'day',
'type': 'limit',
'updated_at': '2020-06-08T11:21:51.424963Z'})]
I'd like to repeat the exercise of writing to a CSV as linked in my other post, but this time write only a subset of the columns from the API into the CSV. My first attempt was, instead of using the keys from the raw dictionary, to specify the fieldnames as a list; but I'm having trouble selecting only the keys in each dict entry that match the list of fieldnames I'm passing in.
with open('historical_orders.csv', 'w', newline='') as csvfile:
    fieldnames = ['id', 'created_at', 'filled_at', 'canceled_at', 'replaced_at', 'symbol', 'asset_class', 'qty',
                  'filled_qty', 'filled_avg_price', 'order_class', 'order_type', 'type',
                  'side', 'time_in_force', 'limit_price',
                  'stop_price', 'status', 'extended_hours', 'legs']
    writer = csv.DictWriter(csvfile, fieldnames)
    writer.writeheader()
    for order in closed_orders:
        writer.writerow(order.__dict__['_raw'].fieldnames)
I get AttributeError: 'dict' object has no attribute 'fieldnames'.
Additionally, I'd like to add columns that split the funky "created_at" value into a date and a time. So instead of created_at = '2020-06-05T16:16:53.307491Z', I'd like a date column '2020-06-05' and a time column '16:16:53'. I was thinking I could do this by adding a loop in each writerow to write one field at a time, but wasn't sure if there was a better way.
Can someone help me with these 2 issues?
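One way to handle both issues is to let DictWriter drop the unwanted keys via extrasaction='ignore', and to derive the date and time columns with a string split before each writerow. A minimal sketch, assuming closed_orders and the _raw dict from the earlier answer:

import csv

# same subset as in the attempt above, plus the two derived columns
fieldnames = ['id', 'created_at', 'filled_at', 'canceled_at', 'replaced_at', 'symbol', 'asset_class', 'qty',
              'filled_qty', 'filled_avg_price', 'order_class', 'order_type', 'type',
              'side', 'time_in_force', 'limit_price',
              'stop_price', 'status', 'extended_hours', 'legs', 'date', 'time']

with open('historical_orders.csv', 'w', newline='') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames, extrasaction='ignore')
    writer.writeheader()
    for order in closed_orders:
        row = dict(order.__dict__['_raw'])       # copy the raw dict so we can add keys
        date_part, _, time_part = row['created_at'].partition('T')
        row['date'] = date_part                  # e.g. '2020-06-05'
        row['time'] = time_part[:8]              # e.g. '16:16:53'
        writer.writerow(row)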
I am using Python. I have placed a trade to buy a stock. The trade filled. I am getting the order data back to see what price it filled at. I am using that price to generate a sell price.
The issue is, I don't know how to sift through the data and extract the fill price. I don't know enough about coding to even know what to call it so here is the data. I am trying to extract that 29.42 in filled_avg_price.
Order({ 'asset_class': 'us_equity',
'asset_id': 'b49cfcfc-b0f7-4bf0-aff8-a33ffe6f0073',
'canceled_at': None,
'client_order_id': 'ENPH4',
'created_at': '2020-04-01T19:27:23.641068Z',
'expired_at': None,
'extended_hours': False,
'failed_at': None,
'filled_at': '2020-04-01T19:27:23.768187Z',
'filled_avg_price': '29.42',
'filled_qty': '1',
'id': 'a58e92a2-35d5-4d6e-9fcb-03c4c1ee8c65',
'legs': None,
'limit_price': '32',
'order_class': '',
'order_type': 'limit',
'qty': '1',
'replaced_at': None,
'replaced_by': None,
'replaces': None,
'side': 'buy',
'status': 'filled',
'stop_price': None,
'submitted_at': '2020-04-01T19:27:23.184461Z',
'symbol': 'ENPH',
'time_in_force': 'day',
'type': 'limit',
'updated_at': '2020-04-01T19:27:23.783736Z'})
I am using import alpaca_trade_api, if you need to install it to check things out.
Here is the Order class from that package:
class Order(Entity):
    def __init__(self, raw):
        super().__init__(raw)
        try:
            self.legs = [Order(o) for o in self.legs]
        except Exception:
            # No order legs existed
            pass
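Since the Entity base class forwards attribute lookups to the raw payload (note how the __init__ above reads self.legs before ever assigning it), you should be able to pull the fill price out as a plain attribute. A minimal sketch; the fetch call and the 1% markup are just hypothetical choices:

order = api.get_order_by_client_order_id('ENPH4')  # or however you already fetch the order
fill_price = float(order.filled_avg_price)         # '29.42' -> 29.42
sell_price = round(fill_price * 1.01, 2)           # hypothetical rule: sell 1% above the fill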
I am currently using glom to parse through a JSON API response, which returns, among other things, a list of dictionaries, with a list of dictionaries inside it. The problem I'm having is getting glom to access the correct dictionary entry.
Example JSON:
{'answeredAt': '2019-08-23T21:11:04Z',
'direction': 'Inbound',
'disposition': 'Answered',
'duration': 110867,
'endedAt': '2019-08-23T21:12:55Z',
'from': {'connectedAt': '2019-08-23T21:11:04Z',
'departmentName': None,
'deviceType': None,
'disconnectedAt': '2019-08-23T21:12:55Z',
'name': 'blah',
'number': '1234567890',
'number_e164': '1234567890',
'serviceId': None,
'userId': None},
'initialQueueName': 'blah',
'joinedLinkedIds': [],
'legs': [{'departmentName': 'default',
'deviceType': 'Unknown',
'legType': 'Dial',
'menuName': None,
'menuOption': None,
'menuPrompt': None,
'number': '1234567890',
'optionAction': None,
'optionArg': None,
'queueName': None,
'serviceId': 327727,
'timestamp': '2019-08-23T21:11:04Z',
'userId': None},
{'departmentName': 'default',
'deviceType': 'Unknown',
'legType': 'Answer',
'menuName': None,
'menuOption': None,
'menuPrompt': None,
'number': '1234567890',
'optionAction': None,
'optionArg': None,
'queueName': None,
'serviceId': 327727,
'timestamp': '2019-08-23T21:11:04Z',
'userId': None},
{'departmentName': None,
'deviceType': None,
'legType': 'EnterIVR',
'menuName': 'blah',
'menuOption': None,
'menuPrompt': None,
'number': None,
'optionAction': None,
'optionArg': None,
'queueName': None,
'serviceId': None,
'timestamp': '2019-08-23T21:11:05Z',
'userId': None},
{'departmentName': None,
'deviceType': None,
'legType': 'IVRSchedule',
'menuName': 'Day',
'menuOption': None,
'menuPrompt': None,
'number': None,
'optionAction': None,
'optionArg': None,
'queueName': None,
'serviceId': None,
'timestamp': '2019-08-23T21:11:06Z',
'userId': None},
{'departmentName': None,
'deviceType': None,
'legType': 'EnterQueue',
'menuName': None,
'menuOption': None,
'menuPrompt': None,
'number': None,
'optionAction': None,
'optionArg': None,
'queueName': 'blah',
'serviceId': None,
'timestamp': '2019-08-23T21:11:15Z',
'userId': None},
{'departmentName': None,
'deviceType': None,
'legType': 'Hangup',
'menuName': None,
'menuOption': None,
'menuPrompt': None,
'number': 'blah',
'optionAction': None,
'optionArg': None,
'queueName': None,
'serviceId': None,
'timestamp': '2019-08-23T21:12:55Z',
'userId': None}],
'linkedId': 'some unique key',
'startedAt': '2019-08-23T21:11:04Z',
'to': {'connectedAt': '2019-08-23T21:11:04Z',
'departmentName': 'default',
'deviceType': 'Unknown',
'disconnectedAt': '2019-08-23T21:12:55Z',
'name': None,
'number': '1234567890',
'number_e164': '1234567890',
'serviceId': 327727,
'userId': None},
'version': {'label': None, 'major': 4, 'minor': 2, 'point': 1}},
The information I'm trying to get at is in 'legs', where 'legType' == 'Dial' or 'EnterIVR'. I need 'number' from the 'Dial' leg, and 'menuName' from the 'EnterIVR' leg. I can get it, for instance, to list back all the different legTypes, but not the data specifically from those.
This is where I'm at currently:
with open('callstest.csv', mode='w') as calls:
    data_writer = csv.writer(calls, delimiter=',')
    data_writer.writerow(['LinkedID', 'Number', 'Queue', 'Client'])
    target = response_json['calls']
    glomtemp = {}
    for item in target:
        spec = {
            'Linked ID': 'linkedId',
            # this returns the number I need only in certain cases,
            # so I need 'number' from the 'Dial' legType
            'Number': ('to', 'number'),
            'Queue': 'initialQueueName',
            'Client': ...  # need help here, should be 'menuName' from the 'EnterIVR' legType
        }
        glomtemp = glom(item, spec)
        # print(glomtemp)
        data_writer.writerow([glomtemp['Linked ID'], glomtemp['Number'], glomtemp['Queue']])
Right now I can get them to fall back with Coalesce to "None", but that's not what I'm looking for.
Any suggestions on how I should spec this to get the info out of those 2 legs for 'Number' and 'Client'?
If I understand correctly, you want to filter out certain entries that don't fit the supported legType. You're definitely onto something with the Coalesce, and I think the key here is glom's Check specifier type, combined with the SKIP singleton. I had to tweak your current spec a bit to match the example data, but this runs:
from pprint import pprint
from glom import glom, Check, Coalesce, SKIP

LEG_SPEC = {'Client': Coalesce('menuName', default=''),
            'Number': Coalesce('to.number', default=''),
            'Linked ID': 'serviceId',
            'Queue': 'queueName'}

entries_spec = ('legs',
                [Check('legType', one_of=('Dial', 'EnterIVR'), default=SKIP)],
                [LEG_SPEC])

pprint(glom(target, entries_spec))

# prints:
# [{'Client': None, 'Linked ID': 327727, 'Number': '', 'Queue': None},
#  {'Client': 'blah', 'Linked ID': None, 'Number': '', 'Queue': None}]
Not sure if that was exactly what you were hoping to see, but the pattern is there. I think you want Nones (or '') for those other fields because the csv you're writing is going to want to put something in those columns.
There are other ways of doing filtered iteration using glom, too. The snippets page has a short section, complete with examples.
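If you specifically want one flat row per call, with 'number' taken from the Dial leg and 'menuName' from the EnterIVR leg, a plain-Python helper alongside glom may be the simplest route. A sketch, assuming the target list and data_writer from your snippet, and at most one leg of each type per call:

def leg_field(call, leg_type, field):
    # return `field` from the first leg whose legType matches, else ''
    for leg in call.get('legs', []):
        if leg.get('legType') == leg_type:
            return leg.get(field) or ''
    return ''

for item in target:
    number = leg_field(item, 'Dial', 'number')        # '1234567890' in the example
    client = leg_field(item, 'EnterIVR', 'menuName')  # 'blah' in the example
    data_writer.writerow([item.get('linkedId'), number, item.get('initialQueueName'), client])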