Difference between import and __import__ in Python

I was taking a look at some commit of a project, and I see the following change in a file:
- import dataFile
+ dataFile = __import__(dataFile)
The coder replaced import dataFile with dataFile = __import__(dataFile).
What exactly is the difference between them?

import dataFile
translates roughly to
dataFile = __import__('dataFile')
Apparently the developer decided to identify modules by strings, presumably so the module to import can be chosen dynamically at run time.
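For example, to pick a module at runtime from a string, the documented interface is importlib.import_module, which wraps the low-level __import__ hook (the module name here is just a placeholder):

```python
import importlib

# The module name can come from config, user input, etc.
module_name = "json"  # placeholder; any importable top-level module works

# importlib.import_module is the recommended wrapper around __import__
mod = importlib.import_module(module_name)

# Equivalent low-level call for a top-level module:
mod2 = __import__(module_name)

print(mod is mod2)  # True: both return the same cached module object
```

Note that for dotted names like "pkg.sub", __import__ returns the top-level package, while importlib.import_module returns the submodule itself, which is one more reason to prefer importlib.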

Related

Is there any feasible solution to read WOT battle results .dat files?

I am new here, trying to solve an interesting question about World of Tanks. I heard that every battle's data is saved on the client's disk in the Wargaming.net folder, and I want to use it for a batch data analysis of our clan's battle performance.
It is said that these .dat files are a kind of JSON file, so I tried a couple of lines of Python to read one, but failed.
import json
f = open('ex.dat', 'r', encoding='unicode_escape')
content = f.read()
a = json.loads(content)
print(type(a))
print(a)
f.close()
The code is very simple and it clearly fails. Could anyone tell me what is actually going on with these files?
Added on Feb. 9th, 2022
After trying another piece of code in a Jupyter Notebook, it seems something can be read from the .dat files:
import struct
import numpy as np
import matplotlib.pyplot as plt
import io
with open('C:/Users/xukun/Desktop/br/ex.dat', 'rb') as f:
    fbuff = io.BufferedReader(f)
    N = len(fbuff.read())
    print('byte length: ', N)
with open('C:/Users/xukun/Desktop/br/ex.dat', 'rb') as f:
    data = struct.unpack('b' * N, f.read(N))
The result is a tuple of integers, but I have no idea how to deal with it now.
Here's how you can parse some parts of it.
import pickle
import zlib
file = '4402905758116487.dat'
cache_file = open(file, 'rb')  # This could be improved to not keep the file open.
# To load pickles written by Python 2 in Python 3, use the 'bytes' (or 'latin1') encoding.
legacyBattleResultVersion, brAllDataRaw = pickle.load(cache_file, encoding='bytes', errors='ignore')
arenaUniqueID, brAccount, brVehicleRaw, brOtherDataRaw = brAllDataRaw
# The data stored inside the pickled file will be a compressed pickle again.
vehicle_data = pickle.loads(zlib.decompress(brVehicleRaw), encoding='latin1')
account_data = pickle.loads(zlib.decompress(brAccount), encoding='latin1')
brCommon, brPlayersInfo, brPlayersVehicle, brPlayersResult = pickle.loads(zlib.decompress(brOtherDataRaw), encoding='latin1')
# Lastly you can print all of these and see a lot of data inside.
The response contains a mixture of more binary files as well as some data captured from the replays.
This is not a complete solution but it's a decent start to parsing these files.
First, you can look at the replay file itself in a text editor, though the binary code at the beginning of the file has to be cleaned out. Then there is a ton of info to read in and figure out: the stats for each player in the game. After that comes the part covering the actual replay itself, which you don't need for battle results.
You can grab the player IDs and tank IDs from WoT developer area API if you want.
After loading the pickle files as gabzo mentioned, you will see that it is simply a list of values, and without knowing what each value refers to, it's hard to make sense of it. The identifiers for the values can be extracted from your game installation:
import zipfile
WOT_PKG_PATH = "Your/Game/Path/res/packages/scripts.pkg"
BATTLE_RESULTS_PATH = "scripts/common/battle_results/"
archive = zipfile.ZipFile(WOT_PKG_PATH, 'r')
for file in archive.namelist():
    if file.startswith(BATTLE_RESULTS_PATH):
        archive.extract(file)
You can then decompile the Python files (e.g. with uncompyle6) and go through the code to see the identifiers for the values.
One thing to note is that the list of values for the main pickle objects (like brAccount from gabzo's code) always has a checksum as the first value. You can use this to check whether you have the right order and the correct identifiers for the values. The way these checksums are generated can be seen in the decompiled python files.
I have been tackling this problem for some time (albeit in Rust): https://github.com/dacite/wot-battle-results-parser/tree/main/datfile_parser.

Best way to handle data import paths independent of the PC I use

I am currently looking into working from different PCs on the same ownCloud data, doing file imports such as:
```data = pd.read_csv(r"C:\Users\User\ownCloud\Sample Information\folder1\folder2\data.csv")```
However, only "/Sample Information/folder1/folder2/data.csv" is independent of the PC that I use.
The Jupyter notebook would be somewhere like this: r"C:\Users\User\ownCloud\Simulations\folder1\folder2\data_model.ipynb"
I tried stuff like the following, but it wouldn't work:
```data = pd.read_csv(".../Sample Information/folder1/folder2/data.csv")```
Is there a concise way of doing these imports, or do I need the os module and some nested combination like os.path.dirname( *... repeat for the number of subfolders until ownCloud is reached...* os.path.dirname(os.getcwd()))?
Thanks for your help!
EDIT:
After some pointers I now use either of these two solutions:
import os
ipynb_path = os.getcwd()
n_th_folder = 4
### Version 1
split = ipynb_path.split(os.sep)[:-n_th_folder]
ownCloud_path = (os.sep).join(split)
### Version 2
ownCloud_path = ipynb_path
for _ in range(n_th_folder):
    ownCloud_path = os.path.dirname(ownCloud_path)
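Both versions above climb n parent folders; with pathlib the same climb is a one-liner via parents (the path and depth below are illustrative, not from the question):

```python
from pathlib import PurePosixPath

# Illustrative notebook location; in practice use Path.cwd()
ipynb_path = PurePosixPath("/Users/User/ownCloud/Simulations/folder1/folder2")
n_th_folder = 3  # number of levels to climb to reach ownCloud

# parents[0] is the immediate parent, so n levels up is parents[n - 1]
ownCloud_path = ipynb_path.parents[n_th_folder - 1]
print(ownCloud_path)  # /Users/User/ownCloud
```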
You could replace username in the path with its value by string interpolation.
import os
data = pd.read_csv(r"C:\Users\{}\ownCloud\Sample Information\folder1\folder2\data.csv".format(os.getlogin()))
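Alternatively, the username can be avoided altogether: assuming the ownCloud folder sits directly under the user's home directory (as in the paths above), pathlib resolves it portably:

```python
from pathlib import Path

# Assumes ownCloud lives directly under the user's home directory
own_cloud = Path.home() / "ownCloud"
csv_path = own_cloud / "Sample Information" / "folder1" / "folder2" / "data.csv"

# data = pd.read_csv(csv_path)  # works on any machine with the same layout
print(csv_path)
```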

Python 3 use a code from a file

I have two Python files. In my main file I work with a openpyxl module. In my second file I have many string lines with concatenating using Excel file cells, for example:
'/ip address=' + sheet['D'+ row].value + '\n'
and many others. But there is a problem, if I import that file to a main file using:
from file2 import *
I get many errors about undefined names like:
NameError: name 'sheet' is not defined
And it is really defined only in my main file, like:
wb = openpyxl.load_workbook(filename='clients.xlsx')
sheet = wb.get_sheet_by_name('Page1')
How can I import everything from my file2 and get it work?
The problem is that import executes file2 in its own namespace, so names like sheet that exist only in your main file are undefined there. Also note that execfile() was removed in Python 3; exec(open(path).read()) is the closest equivalent, but restructuring the code is cleaner.
There are some more ways to run Python code from Python, which you might want to check out.
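A cleaner approach than exec-style tricks is to have file2 define functions that take the sheet as a parameter (the function name below is illustrative, not from the question):

```python
# file2.py
def ip_line(sheet, row):
    # Build one config line from an Excel cell in column D
    return '/ip address=' + sheet['D' + str(row)].value + '\n'

# main file:
# import openpyxl
# from file2 import ip_line
# wb = openpyxl.load_workbook(filename='clients.xlsx')
# sheet = wb['Page1']
# line = ip_line(sheet, 2)
```

This way file2 never needs to know where sheet comes from, and the NameError disappears.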

Import csv Python with Spyder

I am trying to import a csv file into Python but it doesn't seem to work unless I use the Import Data icon.
I've never used Python before, so apologies if I am doing something obviously wrong. I use R, and I am trying to replicate in Python the same tasks I do in R.
Here is some sample code:
import pandas as pd
import os as os
Main_Path = "C:/Users/fan0ia/Documents/Python_Files"
Area = "Pricing"
Project = "Elasticity"
Path = os.path.join(Main_Path, Area, Project)
os.chdir(Path)
#Read in the data
Seasons = pd.read_csv("seasons.csv")
Dep_Sec_Key = pd.read_csv("DepSecKey.csv")
These files import without any issues but when I execute the following:
UOM = pd.read_csv("FINAL_UOM.csv")
Nothing shows in the variable explorer panel and I get this in the IPython console:
In [3]: UOM = pd.read_csv("FINAL_UOM.csv")
If I use the Import Data icon and use the wizard selecting DataFrame on the preview tab it works fine.
The same file imports into R with the same kind of command, so I don't know what I am doing wrong. Is there any way to see what code was generated by the wizard so I can compare it to mine?
Turns out the data had imported; it just wasn't showing in the variable explorer.
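In general, an assignment in the IPython console produces no output, so silence does not mean failure. A quick check like this confirms the load (the small DataFrame below is a stand-in for the imported CSV):

```python
import pandas as pd

# Hypothetical frame standing in for UOM = pd.read_csv("FINAL_UOM.csv")
UOM = pd.DataFrame({"code": ["EA", "KG"], "desc": ["each", "kilogram"]})

print(UOM.shape)   # (rows, columns) proves the data is there
print(UOM.head())  # first rows, even if the variable explorer shows nothing
```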

File path name for NumPy's loadtxt()

I was wondering if somebody had some information on how to load a CSV file using NumPy's loadtxt(). For some reason it claims that there is no such file or directory, when clearly there is. I've even copy/pasted the full path (with and without the leading / for root), but to no avail.
from numpy import *
FH = loadtxt("/Users/groenera/Desktop/file.csv")
or
from numpy import *
FH = loadtxt("Users/groenera/Desktop/file.csv")
The documentation for loadtxt is very unhelpful about this.
You might have forgotten to escape the backslashes ("\\") in a Windows path; a single backslash can form an escape sequence and corrupt the path.
So instead of
FH = loadtxt("/Users/groenera/Desktop/file.csv")
do this:
FH = loadtxt("C:\\Users\\groenera\\Desktop\\file.csv")
This is probably not a loadtxt problem. Try simply
f = open("/Users/groenera/Desktop/file.csv")
to make sure it is loadtxt's fault. Also, try using a Unicode string:
f = open(u"/Users/groenera/Desktop/file.csv")
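Once the file itself opens, note that loadtxt splits on whitespace by default, so a comma-separated file also needs delimiter=',' (io.StringIO stands in for the file on disk so the sketch runs anywhere):

```python
import io
import numpy as np

# Stand-in for the CSV on disk
csv_text = io.StringIO("1.0,2.0\n3.0,4.0\n")

FH = np.loadtxt(csv_text, delimiter=",")
print(FH.shape)  # (2, 2)
```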
I am using PyCharm, Python 3.5.2.
Right-click on your project, create a file named 'planet.csv', and paste your text into it.
Add a header to each column.
Code:
import pandas as pd
data = pd.read_csv('planet.csv', sep="\n")
print(data)
