How do I export modal data from the program OpenModal - python

I am asking this question for Moritz:
I managed to run the programme (ubuntu-linux-tested branch) and upload a .uff file. The analysis itself works reasonably well, but how do I extract the modal parameters (eigenfrequencies, damping and mode shapes)? When I click on export and select the analysis results, it saves an almost empty .uff file (see the attached .txt file).
Thank you in advance!

Currently the modal vectors are not yet exported to UFF; you would have to open the saved *.mdd file with Python (the .mdd file is actually pickled data). Within that data you will find the geometry data, modal vectors, etc.
Here is the Python code to do this (see also: https://github.com/openmodal/OpenModal/issues/31):
import pickle
import sys

import pandas as pd

import OpenModal as OM

# Remap the old module path so the pickled objects can be loaded
# (see: https://stackoverflow.com/questions/13398462/unpickling-python-objects-with-a-changed-module-path)
sys.modules['openModal'] = OM

file_name = r'beam_accel.mdd'
with open(file_name, 'rb') as f:
    data = pickle.load(f)

# Write the analysis tables to an Excel workbook
with pd.ExcelWriter('output.xlsx') as writer:
    data[0].tables['analysis_index'].astype(str).to_excel(writer, sheet_name='index')
    data[0].tables['analysis_values'].astype(str).to_excel(writer, sheet_name='values')
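If you are unsure what else the .mdd file holds, a small sketch like the one below can help. It assumes data[0].tables behaves like a dict of pandas DataFrames (the export code above already relies on item access by name), so the exact table names will depend on your measurement and analysis.

# Sketch: list the tables stored in the mdd file (names depend on your analysis)
for name, table in data[0].tables.items():
    print(name, getattr(table, 'shape', type(table)))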


Is there any feasible solution to read WOT battle results .dat files?

I am new here and trying to solve one of my questions about World of Tanks. I heard that every battle's data is stored on the client's disk in the Wargaming.net folder, and I want to run some data analysis on our clan's battle performance.
It is said that these .dat files are a kind of JSON file, so I tried a couple of lines of Python code to read one, but it failed.
import json
f = open('ex.dat', 'r', encoding='unicode_escape')
content = f.read()
a = json.loads(content)
print(type(a))
print(a)
f.close()
The code is very simple and clearly does not work. Could anyone tell me what these files actually are?
Added on Feb. 9th, 2022
After trying another snippet in a Jupyter Notebook, it seems something can at least be read out of the .dat files:
import struct
import numpy as np
import matplotlib.pyplot as plt
import io

with open('C:/Users/xukun/Desktop/br/ex.dat', 'rb') as f:
    fbuff = io.BufferedReader(f)
    N = len(fbuff.read())
    print('byte length: ', N)

with open('C:/Users/xukun/Desktop/br/ex.dat', 'rb') as f:
    data = struct.unpack('b' * N, f.read(1 * N))
The result is a tuple of integers, but I have no idea what to do with it now.
Here's how you can parse some parts of it.
import pickle
import zlib

file = '4402905758116487.dat'
cache_file = open(file, 'rb')  # This can be improved to not keep the file open.

# To load pickles written by Python 2 in Python 3, use the "bytes" or "latin1" encoding.
legacyBattleResultVersion, brAllDataRaw = pickle.load(cache_file, encoding='bytes', errors='ignore')
arenaUniqueID, brAccount, brVehicleRaw, brOtherDataRaw = brAllDataRaw

# The data stored inside the pickled file is itself a zlib-compressed pickle.
vehicle_data = pickle.loads(zlib.decompress(brVehicleRaw), encoding='latin1')
account_data = pickle.loads(zlib.decompress(brAccount), encoding='latin1')
brCommon, brPlayersInfo, brPlayersVehicle, brPlayersResult = pickle.loads(zlib.decompress(brOtherDataRaw), encoding='latin1')

# Finally, you can print any of these and see a lot of data inside.
The response contains a mixture of more binary files as well as some data captured from the replays.
This is not a complete solution but it's a decent start to parsing these files.
First, you can look at the replay file itself in a text editor, but you will need to clean out the binary header at the beginning of the file. After that there is a ton of information to read through and figure out, but it is the stats for each player in the game. Then comes the part that contains the actual replay data, which you don't need for this.
You can grab the player IDs and tank IDs from the WoT developer area API if you want.
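If you only need the per-player stats from a .wotreplay file, a rough sketch along these lines can pull out the JSON blocks at the start. It assumes the commonly described layout (4-byte magic, 4-byte little-endian block count, then length-prefixed blocks), which may not hold for every game version, so verify it against your own files.

import json
import struct

def read_json_blocks(path):
    # Assumed layout: 4-byte magic, 4-byte little-endian block count,
    # then per block: 4-byte little-endian length + that many bytes of payload.
    with open(path, 'rb') as f:
        f.read(4)                                   # skip magic bytes
        (n_blocks,) = struct.unpack('<I', f.read(4))
        blocks = []
        for _ in range(n_blocks):
            (size,) = struct.unpack('<I', f.read(4))
            raw = f.read(size)
            try:
                blocks.append(json.loads(raw))
            except ValueError:
                blocks.append(raw)                  # not every block is guaranteed to be JSON
        return blocks

# Hypothetical file name:
# blocks = read_json_blocks('some_battle.wotreplay')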
After loading the pickle files as gabzo mentioned, you will see that it is simply a list of values, and without knowing what each value refers to, it's hard to make sense of it. The identifiers for the values can be extracted from your game installation:
import zipfile

WOT_PKG_PATH = "Your/Game/Path/res/packages/scripts.pkg"
BATTLE_RESULTS_PATH = "scripts/common/battle_results/"

archive = zipfile.ZipFile(WOT_PKG_PATH, 'r')
for file in archive.namelist():
    if file.startswith(BATTLE_RESULTS_PATH):
        archive.extract(file)
You can then decompile the Python files (for example with uncompyle6) and go through the code to find the identifiers for the values.
One thing to note is that the list of values for the main pickle objects (like brAccount from gabzo's code) always has a checksum as the first value. You can use this to check whether you have the right order and the correct identifiers for the values. The way these checksums are generated can be seen in the decompiled Python files.
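For illustration, once you have pulled a list of field names out of the decompiled battle_results modules, pairing them with the pickled values is just a zip. The names below are placeholders, not the real identifiers.

# Placeholder identifiers -- replace with the list taken from the decompiled scripts
account_fields = ['field_1', 'field_2', 'field_3']

# account_data comes from gabzo's snippet; the first value is the checksum
checksum, *values = account_data
account = dict(zip(account_fields, values))
print(checksum, account)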
I have been tackling this problem for some time (albeit in Rust): https://github.com/dacite/wot-battle-results-parser/tree/main/datfile_parser.

How to link HDF5 files generated with Pandas?

Assume we have a folder with HDF5-files generated by pandas.to_hdf. I would like to create one master.h5 file that contains external links to all the DataFrames.
According to the documentation of h5py, the standard way to do this is
myfile = h5py.File('master.h5','w')
myfile['ext link'] = h5py.ExternalLink("some_sub_file.h5", "/path/to/resource")
But files generated by pandas.to_hdf contain not just datasets, but h5py.Groups. How exactly would you then set up the external link to work?
Links can point to any object in the HDF5 data structure (datasets or groups). The file itself is a special form of a group, called the root group, referenced with '/'. So, to link to a whole file, use: h5py.ExternalLink(filename, '/').
You didn't say whether you want a link for each dataframe/dataset in each file, or a link for each file. It's simpler to create links to the file root groups. If you create individual links to the datasets, be sure you assign unique names.
There are two earlier answers that demonstrate each method. The questions were not specifically about h5py.ExternalLink(), but my answers to both used external links. See these answers:
HDF5 Attributes of External Links: creates links to the root group of multiple files (each file only has 1 dataset, but your process would be identical).
I/O Issues in Loading Several Large H5PY Files: creates links to multiple datasets in multiple files (requires unique dataset names to work "as-is"; can be modified if names are not unique).
I modified the code from the second answer (70089964) to show how to create 3 external links from the master file to the root group in 3 files (where each file has 5 datasets).
Code to create 3 example files:
import h5py
import numpy as np

for fcnt in range(3):
    fname = f'file_{fcnt+1}.h5'
    with h5py.File(fname, 'w') as h5fw:
        for dscnt in range(1, 6, 1):
            arr = np.random.random(10).reshape(5, 2)
            h5fw.create_dataset(f'data_{fcnt*10+dscnt:02}', data=arr*dscnt)
Code to create links from the master file to the 3 files:
import h5py

fnames = ['file_1.h5', 'file_2.h5', 'file_3.h5']

with h5py.File(f'master_{len(fnames)}_links.h5', 'w') as h5fw:
    for fname in fnames:
        # Opening the target file is not required for the link; it only confirms the file is readable
        with h5py.File(fname, 'r') as h5fr:
            h5fw[fname] = h5py.ExternalLink(fname, '/')
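Once the master file exists, h5py resolves the external links transparently, so you can read any dataset through it, for example:

import h5py

with h5py.File('master_3_links.h5', 'r') as h5m:
    print(list(h5m['file_1.h5'].keys()))   # dataset names inside the linked file
    arr = h5m['file_1.h5/data_01'][:]      # data is read through the external link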
Additional research on Pandas and HDF5 links revealed an interesting limitation: you can create links from Pandas, but Pandas can't access the linked data. In other words, the links are there and work fine with HDFView, h5py and PyTables; it is Pandas itself that can't follow them. Reference these GitHub issues:
Pandas hdf functions should support the hdf5 ExternalLink functionality when reading/writing - Issue #6019
Presence of softlink in HDF5 file breaks HDFStore.keys() - Issue #20523
Status for both appears to be Open. My tests confirm previously reported errors.
The code below shows how to create both link types. It also shows the error message you will get when you try to access the linked data (the message is: KeyError: "you cannot get attributes from this 'NoAttrs' instance"). This is due to HDF5 attribute restrictions on links: an HDFStore node has some required attributes, and when Pandas tries to read them through a link the result is the 'NoAttrs' message.
import pandas as pd

df1 = pd.DataFrame({"a": [1, 2, 3, 4], "b": [11, 12, 13, 14]})
print(df1.to_string())

# Create file 1 with simple dataframe
f1 = "test_1.hdf"
with pd.HDFStore(f1, mode="w") as hdf1:
    hdf1.put("/key1", df1)

# Create file 2 with external link
f2 = "test_extlink.hdf"
with pd.HDFStore(f2, mode="w") as hdf2:
    hdf2._handle.create_external_link(hdf2._handle.root, "extlink_key1", f"{f1}:/key1")
    print("Successful external link write")

with pd.HDFStore(f2, mode="r") as hdf2:
    print(hdf2.keys())  # Notice that [] (no keys) is printed
    # following lines will trigger the 'NoAttrs' error message
    # df2test = pd.read_hdf(f2, key="extlink_key1")
    # print(df2test.to_string())
    print("End external link read")

# Create file 3 with simple dataframe and symbolic (soft) link
f3 = "test_symlink.hdf"
with pd.HDFStore(f3, mode="w") as hdf3:
    hdf3.put("/key1", df1)
    hdf3._handle.create_soft_link(hdf3._handle.root, "symlink_key1", "/key1")
    print("Successful symbolic link write")

with pd.HDFStore(f3, mode="r") as hdf3:
    print(hdf3.keys())  # Notice that only ['key1'] is printed
    # following lines will trigger the 'NoAttrs' error message
    # df3test = pd.read_hdf(f3, key="symlink_key1")
    # print(df3test.to_string())
    print("End symbolic link read")

Saving images to a folder in the same directory Python

I have written this code that takes a URL and downloads the image. I was wondering how I can save the images to a specific folder in the same directory.
For example, I have a folder named images in the same directory. I wanted to save all the images to that folder.
Below is my code:
import requests
import csv

with open('try.csv', 'r') as csv_file:
    csv_reader = csv.reader(csv_file)
    for rows in csv_reader:
        image_resp = requests.get(rows[0])
        with open(rows[1], 'wb') as image_downloader:
            image_downloader.write(image_resp.content)
Looking for this?
import os

with open(os.path.join("images", rows[1]), 'wb') as fd:
    for chunk in image_resp.iter_content(chunk_size=128):
        fd.write(chunk)
See the official docs here: os.path.join
The requests-specific part comes from https://2.python-requests.org/en/master/user/quickstart/#raw-response-content
PS: You might want to use a connection pool and/or multiprocessing instead of the plain for rows in csv_reader loop, in order to run many requests concurrently.
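A hedged sketch of that idea, using a requests.Session for connection pooling plus a thread pool; the column order and file names follow the question's CSV layout.

import csv
import os
from concurrent.futures import ThreadPoolExecutor

import requests

def download(session, url, filename):
    resp = session.get(url, stream=True)   # stream so large images are not held fully in memory
    resp.raise_for_status()
    with open(os.path.join("images", filename), "wb") as fd:
        for chunk in resp.iter_content(chunk_size=8192):
            fd.write(chunk)

with open("try.csv", "r") as csv_file, requests.Session() as session:
    rows = list(csv.reader(csv_file))
    with ThreadPoolExecutor(max_workers=8) as pool:
        for row in rows:
            pool.submit(download, session, row[0], row[1])

If sharing a single Session across threads worries you, create one session per worker instead.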
Saving an image can be done with the .save() method of the Pillow library's Image module in Python.
Example:
from PIL import Image

file = "C:/Users/ABC/MyPic.jpg"          # Selects the jpg file 'MyPic'
img = Image.open(file)
img.save('Desktop/MyPics/new_img.jpg')   # Saves it to the MyPics folder as 'new_img'
Image.save(fp, format=None, **params):
Saves this image under the given filename. If no format is specified, the format to use is determined from the filename extension, if possible.
Keyword options can be used to provide additional instructions to the writer. If a writer doesn’t recognise an option, it is silently ignored. The available options are described in the image format documentation for each writer.
You can use a file object instead of a filename. In this case, you must always specify the format. The file object must implement the seek, tell, and write methods, and be opened in binary mode.
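For example, when writing to a file object instead of a path, the format has to be given explicitly:

from io import BytesIO
from PIL import Image

img = Image.open("MyPic.jpg")
buffer = BytesIO()
img.save(buffer, format="JPEG")   # format is required when saving to a file object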

Is the existing group or dataset cleared when opening and writing an h5 file using h5py?

Some data need to be written to an HDF5 file in several steps; sample code is posted below. The problem I am running into is that the existing h5 group and dataset are wiped when a new step (opening and writing again) runs.
import h5py
import numpy as np
a=r"F:\HY1A1B\cd.h5"
#first open and write
b=h5py.File(a, 'w')
zeroPixelCounts = np.zeros((5,10))
QC_Attribute = b.create_group("QC Attributes")
QC_Attribute.create_dataset("Zero Pixel Counts",(5,10),data=zeroPixelCounts)
b.close()
#second open and write
b=h5py.File(a, 'w')
QC_Attributex = b.create_group("QC Attributes xxxx")
QC_Attributex.create_dataset("Zero Pixel Counts",(5,10),data=zeroPixelCounts)
b.close()
# Problem: the data written in the first open-and-write step has been wiped
I think mode "w" will always create a new (truncated) HDF5 file, so the second time you must open in read/write/create mode ("a", for append):
#second open and write
b=h5py.File(a, 'a')
Worked for me:
>>> list(b.keys())
['QC Attributes', 'QC Attributes xxxx']
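Putting it together, the second step with mode 'a' (and a context manager, so the file is always closed) could look like this:

# Second open and write: append to the existing file instead of truncating it
with h5py.File(a, 'a') as b:
    QC_Attributex = b.create_group("QC Attributes xxxx")
    QC_Attributex.create_dataset("Zero Pixel Counts", (5, 10), data=zeroPixelCounts)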

python: load OPeNDAP data into a NetCDF file

I am opening netCDF data from an OPeNDAP server (a subset of the data) using a URL. When I open it, the data is (as far as I can see) not actually loaded until a variable is requested. I would like to save the data to a file on disk; how would I do this?
I currently have:
import numpy as np
import netCDF4 as NC
url = u'http://etc/etc/hourly?varname[0:1:10][0:1:30]'
dataset = NC.Dataset(url)  # I think the data is not yet loaded here, only the "layout"
varData = dataset.variables['varname'][:, :]  # I think the data is loaded here
# Now I want to save this data to a file (for example test.nc); dataset.close() obviously won't work
Hope someone can help, thanks!
If you can use xarray, this should work as:
import xarray as xr
url = u'http://etc/etc/hourly?varname[0:1:10][0:1:30]'
ds = xr.open_dataset(url, engine='netcdf4') # or engine='pydap'
ds.to_netcdf('test.nc')
The xarray documentation has another example of how you could do this.
It's quite simple: create a new NetCDF file and copy whatever you want :) Luckily this can largely be automated by copying the correct dimensions, NetCDF attributes, etc. from the input file. I quickly coded this example; the input file here happens to be a local file, but if reading over OPeNDAP already works, it should work in a similar way.
import netCDF4 as nc4

# Open input file in read (r), and output file in write (w) mode:
nc_in = nc4.Dataset('drycblles.default.0000000.nc', 'r')
nc_out = nc4.Dataset('local_copy.nc', 'w')

# For simplicity, copy all dimensions (with correct size) to the output file
for dim in nc_in.dimensions:
    nc_out.createDimension(dim, nc_in.dimensions[dim].size)

# List of variables to copy (they have to be in nc_in...):
# If you want all variables, this could be replaced with nc_in.variables
vars_out = ['z', 'zh', 't', 'th', 'thgrad']

for var in vars_out:
    # Create variable in new file:
    var_in = nc_in.variables[var]
    var_out = nc_out.createVariable(var, datatype=var_in.dtype, dimensions=var_in.dimensions)

    # Copy NetCDF attributes:
    for attr in var_in.ncattrs():
        var_out.setncattr(attr, var_in.getncattr(attr))

    # Copy data:
    var_out[:] = var_in[:]

nc_out.close()
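The example above copies dimensions and per-variable attributes; if the input file also carries global attributes you want to keep, one extra line (placed before nc_out.close()) copies them as well:

# Copy global (file-level) NetCDF attributes too
nc_out.setncatts({name: nc_in.getncattr(name) for name in nc_in.ncattrs()})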
Hope it helps, if not let me know.
