Import a SQL Server Management Studio backup from an .ipynb file - python

I have a question about an .ipynb file: I was sent a database whose structure I need to replicate. The senders use SQL Server Management Studio, but I don't know how to import the file. At first I thought it was a simple Python script that could create a SQL database, so I installed Anaconda and tried %%sql statements to recreate it,
until I realized the file might be meant to be imported into SSMS. But there is something I am not doing right, and I understand it is a matter of parsing the file correctly.
I appreciate any help, thanks!
I have installed extensions in Visual Studio Code, Anaconda, and the necessary libraries for handling SQL in Python, but it all boils down to correctly importing the file created in SSMS.

The .ipynb file is a notebook that contains scripts to be executed against a database, or to create the database and its objects as well.
What you are using in SSMS is a tool to import data into tables - these are not the same thing.
As mentioned by @Squirrel, SSMS does not support notebooks, BUT Azure Data Studio does. I think the notebook was created using Azure Data Studio (which is installed along with SSMS on your computer, provided you have a recent version of SSMS).
Note that Azure Data Studio is only the name of the tool - it is not restricted to connecting to databases in Azure or running in Azure so you can use it for local or on-premises databases as well.
When you open Azure Data Studio, click on the Notebooks button and then the file icon to browse to and open your notebook.
You will still likely have to set up your connection but that is a similar experience to SSMS.

I would suggest following the steps below:
Open the notebook file in Jupyter Notebook and copy the contents of all the cells, as described here:
How to copy multiple input cells in Jupyter Notebook
Copy the content into a single .sql file.
In Management Studio, open a new query window, open the file created in step 2, and run the SQL statements.
Note: Review the SQL file first to check that everything is in place. You might need to add GO statements between batches. It is also recommended to put a semicolon at the end of each statement so they run without issues.
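The copy-and-paste in the steps above can also be scripted: an .ipynb file is just JSON, so the code cells can be extracted and joined into a single .sql file. A hypothetical sketch using only the standard library; the tiny notebook dictionary below stands in for the real file you were sent, and the file names are placeholders:

```python
import json

# Stand-in for the notebook you were sent (an .ipynb file is just JSON).
demo_nb = {
    "cells": [
        {"cell_type": "markdown", "source": ["# setup notes"]},
        {"cell_type": "code", "source": ["CREATE TABLE t (id INT);"]},
        {"cell_type": "code", "source": ["INSERT INTO t VALUES (1);"]},
    ]
}
with open("notebook.ipynb", "w", encoding="utf-8") as f:
    json.dump(demo_nb, f)

# Extract every code cell and join the batches with GO separators.
with open("notebook.ipynb", encoding="utf-8") as f:
    cells = json.load(f)["cells"]

sql = "\nGO\n".join(
    "".join(c["source"]) for c in cells if c["cell_type"] == "code"
)
with open("script.sql", "w", encoding="utf-8") as f:
    f.write(sql)
print(sql)
```

The resulting script.sql can then be opened and run in a Management Studio query window; you should still review it by hand as noted above.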

Related

How do I create a standalone Jupyter Lab server using Pyinstaller or similar?

I would like to create a self-contained .exe file that launches a JupyterLab server, to act as an IDE on a physical server that doesn't have Python installed.
The idea is to deploy it as part of an ETL workflow tool, so that it can be used to view notebooks that will contain the ETL steps in a relatively easily digestible format (the notebooks will be used as pipelines via papermill and scrapbook - not really relevant here).
While I can use Pyinstaller to bundle JupyterLab as a package, there isn't a way to launch it on the Pythonless server (that I can see), and I can't figure out a way to do it using Python code alone.
Is it possible to package JupyterLab this way so that I can run the .exe on the server and then connect to 127.0.0.1:8888 on the server to view a notebook?
I have tried using the link below as a starting point, but I think I'm missing something as no server seems to start using this code alone, and I'm not sure how I would execute this via a tornado server etc.:
https://gist.github.com/bollwyvl/bd56b58ba0a078534272043327c52bd1
I would really appreciate any ideas, help, or somebody telling me why this idea is impossible madness!
Thanks!
Phil.
P.S. I should add that Docker isn't an option here :( I've done this before using Docker and it's extremely easy.

Refresh All Data on Excel Workbook in Sharepoint Using Python

To start: I managed to successfully run pywin32 locally, where it opened the Excel workbooks, refreshed the SQL query, then saved and closed them.
I had to download those workbooks locally from SharePoint and sync them via OneDrive to apply the changes.
My question is: would this be possible to do within SharePoint itself? Have a Python script scheduled on a server and have the process occur there in the backend through a command.
I use a program called Alteryx where I can have batch files execute scripts, and maybe I could use an API of some sort to accomplish this on a scheduled basis, since that's the only server I have access to.
I have tried looking on this site and other sources, but I can't find a post that references this specifically.
I use Jupyter Notebooks to write my scripts and Alteryx to build a workflow with those scripts, but I can use other IDEs if I need to.

How can I read a local file from an R or Python script in Azure Machine Learning Studio?

I need to read a CSV file, which is saved on my local computer, from code within an "Execute R/Python Script" module in an Azure Machine Learning Studio experiment. I cannot upload the data the usual way, i.e. from Datasets -> New -> Load from local file, or with an Import Data module; I must do it with code. In principle this is not possible, neither from an experiment nor from a notebook, and in fact I always get an error. But I'm confused, because the documentation for the Execute Python Script module says (among other things):
Limitations
The Execute Python Script currently has the following limitations:
Sandboxed execution. The Python runtime is currently sandboxed and, as a result, does not allow access to the network or to the local file system in a persistent manner. All files saved locally are isolated and deleted once the module finishes. The Python code cannot access most directories on the machine it runs on, the exception being the current directory and its subdirectories.
According to the highlighted text, it should be possible to access and load a file from the current directory, using for instance the pandas function read_csv. But in practice it is not. Is there some trick to accomplish this?
Thanks.
You need to remember that Azure ML Studio is an online tool; it does not run any code on your local machine.
All the work is being done in the cloud, including running the Execute Python Script, and this is what the text you've highlighted refers to: the directories and subdirectories of the cloud machine running your machine learning experiment, and not your own, local, computer.
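To illustrate what that means in practice: the "current directory" the sandbox allows is a directory on the cloud machine, so code can only read a file there if the file was shipped with the experiment (for example, bundled in a zip attached to the module) rather than sitting on your local PC. A small stand-alone sketch of that read pattern, using only the standard library and simulating the bundled file by writing it first:

```python
import csv
import os
import tempfile

# Simulate the module's working directory on the cloud machine.
os.chdir(tempfile.mkdtemp())

# Stand-in for a file shipped with the experiment (e.g. inside a zip
# bundle); in a real experiment this file would already be present.
with open("input.csv", "w", newline="") as f:
    csv.writer(f).writerows([["a", "b"], ["1", "2"]])

# Reading from the current directory works, because it is the *cloud*
# machine's current directory, not a path on your local computer.
with open("input.csv", newline="") as f:
    rows = list(csv.reader(f))
print(rows)  # → [['a', 'b'], ['1', '2']]
```

The same read would fail if "input.csv" only existed on your local disk, which is exactly the situation described in the question.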

Running an exe file on the azure

I am working on an Azure web app, and inside the web app I use Python code to run an .exe file. The web app receives certain inputs (numbers) from the user and stores those inputs in a text file. Afterwards, an .exe file runs, reads the inputs, and generates another text file called "results". The problem is that although the code works fine on my local computer, as soon as I put it on Azure, the exe file does not get triggered by the following line of code:
subprocess.call('process.exe', cwd=case_directory.path, shell=True)
I even tried running the exe file on Azure manually from Visual Studio Team Services (formerly Visual Studio Online) via the "run from Console" option. It just did not do anything. I'd appreciate it if anyone can help me.
Have you looked at using a WebJob to host/run your executable? A WebJob can be virtually any kind of script or Windows executable. There are a number of ways to trigger your WebJob, and you also get a lot of built-in monitoring and logging for free via the Kudu interface.
@F.K, I found some information that may be helpful for you; please see below.
According to the Python documentation for the subprocess module, using shell=True can be a security hazard; see the warning under Frequently Used Arguments for details.
A comment on that article points in a useful direction for this issue.
However, the normally recommended way to satisfy your needs is to use Azure Queue Storage, Blob Storage, and Azure WebJobs: save the input into a storage queue, then have a continuous WebJob process the files picked up from the queue and save the result files into Blob Storage.
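Following the shell=True warning above, a minimal sketch of invoking the executable with an argument list and an explicit working directory instead (requires Python 3.7+ for capture_output). The names process.exe and case_directory.path come from the question; here the Python interpreter stands in for the .exe so the sketch runs anywhere:

```python
import os
import subprocess
import sys

# Stand-in for process.exe: the Python interpreter printing a line. On
# Azure you would pass the full path to the real executable instead,
# e.g. os.path.join(case_directory.path, "process.exe").
result = subprocess.run(
    [sys.executable, "-c", "print('results written')"],
    cwd=os.getcwd(),       # equivalent of cwd=case_directory.path
    capture_output=True,   # keep stdout/stderr so failures can be logged
    text=True,
    check=True,            # raise CalledProcessError on a non-zero exit
)
print(result.stdout.strip())  # → results written
```

Capturing output and using check=True also makes silent failures like the one described above much easier to diagnose, since you see the process's stderr and exit code instead of nothing.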

How to Open Schema.sql in the Blog Demo of Tornado?

I am able to run the chat demo and similar basic Python projects from the command prompt, though I do not yet know how to run a project with a SQL database.
Can somebody suggest a way to open the SQL file schema.sql so I can have a look at the blog demo?
I am using Python 2.7 with a recent version of Tornado on Windows 7.
p.s.
I do understand now that the SQL file is just text with SQL commands and I could simply copy and paste it, though I also see that the CREATE DATABASE command is commented out in the file, so I would have to add that one too.
I am wondering which way would be good to achieve this?
I can tell from the blog.py file that the database connection is made to a MySQL database, so I would not be able to use SQLite or similar.
I do have XAMPP installed - would that work? Remember that XAMPP runs on Apache, while the blog demo runs on the Tornado server. Would this combination work out properly?
If you have XAMPP running, the easiest option for you is to go into phpMyAdmin (http://localhost/phpmyadmin). Then, in the SQL tab, paste the SQL file you want and press OK.
Then just run the Tornado server, and it should work fine.
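If you would rather run the schema from Python than paste it into phpMyAdmin, the generic pattern is to read schema.sql and execute its statements one at a time. A hypothetical sketch: the blog demo itself needs MySQL (swap sqlite3 for a MySQL driver such as MySQLdb and a real connection); sqlite3 and an inline schema string are used here only so the sketch is self-contained:

```python
import sqlite3

# Inline stand-in for the contents of schema.sql.
schema = """
CREATE TABLE entries (
    id INTEGER PRIMARY KEY,
    title TEXT NOT NULL
);
CREATE TABLE authors (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
"""

conn = sqlite3.connect(":memory:")
# Naive split on ';' -- fine for simple schema files that have no ';'
# inside string literals or stored-procedure bodies.
for statement in schema.split(";"):
    if statement.strip():
        conn.execute(statement)
conn.commit()

tables = sorted(
    r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'"
    )
)
print(tables)  # → ['authors', 'entries']
```

Remember that, as noted above, the CREATE DATABASE line is commented out in the demo's schema.sql, so with MySQL you would create the database first (or uncomment that line) before running the statements.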
