Relative zipline newbie here (FYI I have been learning mainly via the book TRADING EVOLVED plus trial and error). I have a few data bundles that extract from my MySQL db of historical daily price data and work fine - as in I can run algos on them which transact orders (of any type) as expected.
However, if I fabricate a symbol and create a bunch of price history for it, the algos will analyze the data no problem but will not fill my orders for it. E.g. I could invent the symbol ZYXW and give it the same kind of data as my real stocks, but no order happens. I can submit the order, but it will sit there forever until the algo ends. At first I wondered if this had to do with the "data.can_trade" check I was doing, but it is the same without that call.
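For context, the pattern I'm using is roughly this stripped-down sketch (not my exact code; ZYXW is the fabricated symbol and the bundle setup is omitted):

    # rough sketch of the pattern described above, not a verified fix
    from zipline.api import order, symbol

    def initialize(context):
        # ZYXW exists in my custom bundle with fabricated daily bars
        context.asset = symbol('ZYXW')

    def handle_data(context, data):
        # with or without this can_trade check, the order never fills
        if data.can_trade(context.asset):
            order(context.asset, 10)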
Any clues? I am probably misunderstanding some basic principle. Please enlighten me!
I work for a power company and have been tasked with building a database. I have a pretty beginner/intermediate understanding of Python, and can fuddle decently with MSSQL. They have procured Azure for this project, and I am completely lost as to how to start this task.
Here is one of the sources of data that I want to scrape every minute.
http://ets.aeso.ca/ets_web/docroot/tradingPage.html - this is a complete overview of the Alberta power market in real time.
Ideally, I would want to be able to scrape this data and other sources, modify it to fit a certain format, and push it onto the SQL server.
Do I need virtual machines that just loop over Python scripts? Or do I need managed instances? This data also needs to be queryable right after it is scraped. Eventually this data may feed machine learning algorithms (I don't know jack about that either, but I have been told it should play friendly with that type of environment).
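To make it concrete, the kind of loop I'm imagining is roughly the sketch below (the connection string, table name, and which table to keep are all placeholders, not a working config):

    import time
    import pandas as pd
    from sqlalchemy import create_engine

    AESO_URL = "http://ets.aeso.ca/ets_web/docroot/tradingPage.html"

    # placeholder connection string - swap in the real Azure SQL / MSSQL details
    engine = create_engine(
        "mssql+pyodbc://user:password@myserver.database.windows.net/powerdb"
        "?driver=ODBC+Driver+17+for+SQL+Server"
    )

    while True:
        # the AESO page is mostly plain HTML tables, so read_html picks up most of it
        tables = pd.read_html(AESO_URL)
        snapshot = tables[0]                      # pick whichever table(s) matter
        snapshot["scraped_at"] = pd.Timestamp.utcnow()
        # hypothetical table name; append a new snapshot every minute
        snapshot.to_sql("aeso_trading_snapshot", engine, if_exists="append", index=False)
        time.sleep(60)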
Just looking to see if anyone has any insight into how you would approach this, and can tell me what I clearly don't know and haven't thought of. Any insight is truly appreciated.
Thanks!
I am just starting out learning Python and have taken some online courses in my free time.
I am trying to find the data source for this website so I can make a daily count of departures from the airport and eventually build a flights vs. date plot.
I have spent two weeks investigating the page source but am unable to find the JSON source. Would a kind soul please show me where it is? Thanks!
https://www.changiairport.com/en/flights/departures.html
https://www.changiairport.com/cag-web/flights/departures?lang=en&callback=JSON_CALLBACK&date=today
There you go. That will give you the flight schedule for today. The date parameter can probably be other things too, but I don't know what the options are. Any normal GET request should work; it seems publicly accessible.
You just need to right click and go to "Inspect", then hit the "Network" tab and browse through the different requests.
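For example, a minimal pull in Python could look something like this (dropping the JSONP callback parameter to get plain JSON is a guess, as are any key names in the response - inspect the payload first):

    import requests

    url = "https://www.changiairport.com/cag-web/flights/departures"
    params = {"lang": "en", "date": "today"}   # leaving out callback=JSON_CALLBACK
    resp = requests.get(url, params=params)
    resp.raise_for_status()
    data = resp.json()                         # may need manual unwrapping if JSONP comes back
    print(data)                                # explore the structure from here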
Just a note: this is called scraping, and it's often in a legal gray area. As long as you aren't using it too extensively or making a profit off of it, you probably won't get in any trouble, but make sure you have permission from the company if you plan to make a lot of calls to an open API like this. It's usually against their terms of service, though as an unenforced clause they will only invoke if you become a nuisance.
I'm working on a Python web scraper that pulls data from a car advertising site. I got the scraping part all done with BeautifulSoup, but I've run into many difficulties trying to store and modify the data. I would really appreciate some advice on this part, since I'm lacking knowledge here.
So here is what I want to do:
Scrape the data each hour (done).
Store the scraped data as a dictionary in a .json file (done).
Every time an ad_link is not found in scraped_data.json, set dict['Status'] = 'Inactive' for that ad (done).
If a car's price changes, print a notification and add the old price to the dictionary. On this part I came across many challenges with the .json approach.
I've kept using two .json files and comparing them to each other (scraped_data_temp, permanent_data.json), but I don't think this is the best method.
What would you guys suggest? How should I do this?
What would be the best way to approach manipulating this kind of data? (Databases maybe? I've got no experience with them, but I'm eager to learn.) And what would be a good way to represent this kind of data - pygal?
Thank you very much.
If you have larger data, I would definitely recommend using some kind of DB. If you don't need a DB server, you can use SQLite - I have used it in the past to save bigger data locally. You can use SQLAlchemy in Python to interact with DBs.
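As a rough sketch, the price-change check you described could look something like this with SQLAlchemy on top of SQLite (the table layout and names are just an illustration):

    from sqlalchemy import create_engine, text

    # local sqlite file; table/column names are just an example layout
    engine = create_engine("sqlite:///ads.db")

    with engine.begin() as conn:
        conn.execute(text("""
            CREATE TABLE IF NOT EXISTS ads (
                ad_link    TEXT PRIMARY KEY,
                price      INTEGER,
                old_price  INTEGER,
                status     TEXT
            )
        """))

    def upsert_ad(ad_link, price):
        """Insert a new ad, or flag a price change on an existing one."""
        with engine.begin() as conn:
            row = conn.execute(
                text("SELECT price FROM ads WHERE ad_link = :link"),
                {"link": ad_link},
            ).fetchone()
            if row is None:
                conn.execute(
                    text("INSERT INTO ads (ad_link, price, status) VALUES (:link, :price, 'Active')"),
                    {"link": ad_link, "price": price},
                )
            elif row[0] != price:
                print(f"Price change for {ad_link}: {row[0]} -> {price}")
                conn.execute(
                    text("UPDATE ads SET old_price = :old, price = :new WHERE ad_link = :link"),
                    {"old": row[0], "new": price, "link": ad_link},
                )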
As for displaying data, I tend to use matplotlib. It's extremely flexible and has extensive documentation and examples, so you can adjust the output to your liking: graphs, charts, etc.
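A tiny example of plotting a single ad's price history (the numbers here are made up purely to show the shape of the plot):

    import matplotlib.pyplot as plt
    import pandas as pd

    # made-up price history for one ad, just to show the idea
    history = pd.DataFrame({
        "scraped_at": pd.to_datetime(["2023-01-01", "2023-01-02", "2023-01-03"]),
        "price": [15500, 15200, 14900],
    })

    plt.plot(history["scraped_at"], history["price"], marker="o")
    plt.xlabel("Date scraped")
    plt.ylabel("Price")
    plt.title("Price history for one ad")
    plt.tight_layout()
    plt.show()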
I'm assuming that you are using python3.
I'm so sorry for the vague question here, but I'm hoping an SPSS expert will be able to help me out here. We have some surveys that are done via SPSS, from which we extract data for an internal report. Right now the process is very cumbersome and requires going to the SPSS Data Collection Interviewer Server Administration page and manually exporting data from two different projects (which takes hours at a time!). We then take that data, massage it, and upload it to another database that drives the internal report.
My question is, does anyone out there know how to automate this process? Is there a SQL Server database behind the SPSS data? Where does the .mdd file come into play? Can my team (who is well-versed in extracting data from various sources) tap into the SQL Server database behind SPSS to get our data? Or do we need some sort of Python script and plugin?
If I'm missing information that would be helpful in answering the question, please let me know. I'm happy to provide it; I just don't know what to provide.
Thanks so much.
As mentioned by other contributors, there are a few ways to achieve this. The simplest I can suggest is using a DMS (data management script) and the Windows Task Scheduler. Ideally, you should follow the steps below.
Prerequisites:
1. You should have access to the server running IBM Data Collection
2. Basic knowledge of Windows Task Scheduler
3. Knowledge of DMS scripting
Approach:
1. Create a new DMS script from the template
2. If you only want to perform a data extract/transformation, you only need an input and an output data source
3. In the input data source, create/build the connection string pointing to your survey on the IBM Data Collection server, and use SQL as the data source type
4. In the select query, use "Select * from VDATA" if you want to export all variables
5. Set the output data connection string, selecting SPSS as the output data format (if you want to export to SPSS)
6. Run the script manually and check that the SPSS export is what you expect
7. Create a batch file using a text editor (save it with a .bat extension) and add the lines below:
cd "C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Scripts\Data Management\DMS"
Call DMSRun YOURDMSFILENAME.dms
Then add a line to copy (using XCOPY) the extracted data/files to the location where you want to process them further.
Save the file and open Windows Task Scheduler to schedule the execution of this batch file for data extraction.
If you want to do any further processing, you can create an .mrs or .dms file and add it to the batch file.
Hope this helps!
There are a number of different ways you can accomplish easing this task and even automate it completely. However, if you are not an IBM SPSS Data Collection expert and don't have access to somebody who is or have the time to become one, I'd suggest getting in touch with some of the consultants who offer services on the platform. Internally IBM doesn't have many skilled SPSS resources available, so they rely heavily on external partners to do services on a lot of their products. This goes for IBM SPSS Data Collection in particular, but is also largely true for SPSS Statistics.
As noted by previous contributors, there is an approach using Python for data cleaning, merging and other transformations and then loading that output into your report database. For maintenance reasons I'd probably not suggest this approach. Though you are most likely able to automate the export of data from SPSS Data Collection to a .sav file with a simple SPSS Syntax script (and an SPSS add-on data component), it is extremely error prone when upgrading either SPSS Statistics or SPSS Data Collection.
From a best-practice standpoint, you ought to use the SPSS Data Collection Data Management module. It is very flexible and hardly requires any maintenance on upgrades, because you are working within the same data model framework (e.g. survey metadata, survey versions, labels etc. are handled implicitly) right until you load your transformed data into your reporting database.
Ideally, the approach would be to build the mentioned SPSS Data Collection Data Management script and trigger it at the end of each completed interview. That way your reporting will be close to real time (you can make it actual real time by triggering the DM script during the interview using the interview script events - just an FYI).
All scripting on the SPSS Data Collection platform, including Data Management scripting, is very VB-like, so for anyone who knows VB it is very easy to get started, and it is documented very well in the SPSS Data Collection DDL. There you'll also be able to find examples of extracting survey data from SPSS Data Collection surveys (as well as reading and writing data to/from other databases, files etc.). There are also many examples of data manipulation and transformation.
Lastly, to answer your specific questions:
Yes, there is always an MS SQL Server behind SPSS Data Collection - no exceptions. However, generally speaking the data model is way too complex to read data out of directly. If you have a look at it, you'll quickly realize this.
The MDD file (short for Metadata Document) contains all the survey metadata, including data source specifications, version history, etc. Without it you won't be able to make anything of the survey data in the database, which is the main reason I'd suggest staying within the SPSS Data Collection platform for as large a part of your data handling as possible. However, it is indeed just a readable XML file.
Note that the SPSS Data Collection Data Management module requires a separate license, and if the scripting needed is large or complex, you'd probably want Base Professional too, if that's not what you already use for developing the questionnaires and handling the surveys.
Hope that helps.
This isn't as clean as working directly with whatever database is holding the data, but you could do something with an exported data set:
There may or may not be a way for you to write and run an export script from inside your Admin panel or whatever. If not, you could write a simple Python script using Selenium WebDriver which logs into your admin panel and exports all data to a *.sav data file.
Then you can use the Python SPSS extensions to write your analysis scripts. Note that these scripts have to run on a machine that has a copy of SPSS installed.
Once you have your data and analysis results accessible to Python, you should be able to easily write that to your other database.
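For instance, once you have the exported *.sav file, reading it and loading it into the other database could look roughly like this (this sketch uses pyreadstat and pandas instead of the SPSS Python plugin, and the file name, connection string, and table name are all placeholders):

    import pyreadstat
    from sqlalchemy import create_engine

    # read the exported SPSS file - path is a placeholder
    df, meta = pyreadstat.read_sav("exported_survey.sav")

    # placeholder connection string for the reporting database
    engine = create_engine(
        "mssql+pyodbc://user:password@reportserver/reportdb"
        "?driver=ODBC+Driver+17+for+SQL+Server"
    )

    # massage the data as needed, then load it - table name is made up
    df.to_sql("survey_responses", engine, if_exists="replace", index=False)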
I am at an absolutely basic level with Python 3 (everything I currently know is from TheMonkeyLords) and my main focus is to integrate Python 3 with XBRLware so that I can extract financial information from the SEC EDGAR database with accuracy and reliability.
How can I use the xbrlware framework with Python 3? I have absolutely no idea how to use a framework like this with Python 3....
Any suggestions on what I should learn, code for me to study, clues, etc. would be a great help!
Thank you
Don't do it. Based on personal experience, it is very difficult to extract useful financial data from XBRL. XBRLWare does work, but there is a lot of work to do afterwards to extract the data into something useful.
XBRL has over 100 definitions of "revenue". Each industry reports differently. Each company makes 100s of filings and you have to string together data from different reports. It's an incredibly frustrating process.
I have used XBRLWare as a Ruby Gem on Windows. (It is no longer supported.) It does "work". It downloads and formats the reports nicely, but it operates as a viewer. Most filings contain two quarters of data. (Probably not the quarters you want either.)
You can use the XBRL viewer on the SEC's website to accomplish the same thing. Or you can go to the company's 10-Qs.
Also, XBRL uses CIK codes for the companies. As far as I know, the SEC doesn't have a central database to match CIK codes to ticker symbols (if you can believe that!). So it can be frustrating to find the companies you want to download.
If you want to download all the XBRL filings, I've been told it's something like 6 TB a month.
You can't pull historical financial data from XBRL either; you have to string two quarters at a time together. So, to get IBM's history, you'd pull every IBM filing and string together all the 10-Qs. XBRL is only three years old for the large accelerated filers, so historical data is limited.
There is a reason why Wall Street still charges $25k/year for financial data. XBRL is very difficult to use and difficult to extract data from.
You could try: XBRLcloud.com or findynamics.com