How to access DVC-controlled files from Oracle? - python

I have been storing my large files in CLOBs within Oracle, but I am thinking of storing them on a shared drive instead and having a column in Oracle contain pointers to the files. This would use DVC.
When I do this,
(a) are the paths in Oracle paths that point to the files in my shared drive, as in, the actual files themselves?
(b) or do the paths in Oracle point somehow to the DVC metafile?
Any insight would help me out!
Thanks :)
Justin
EDIT to provide more clarity:
I checked here (https://dvc.org/doc/api-reference/open), and it helped, but I'm not fully there yet ...
I want to pull a file from a remote DVC repository using Python (which I have connected to the Oracle database). So, if we can make that work, I think I will be good. But I am confused: if I specify 'remote' below, then how do I name the file (e.g., 'activity.log') when the remote files are all encoded?
with dvc.api.open(
    'activity.log',
    repo='location/of/dvc/project',
    remote='my-s3-bucket'
) as fd:
    for line in fd:
        match = re.search(r'user=(\w+)', line)
        # ... Process users activity log
(NOTE: For testing purposes, my "remote" DVC directory is just another folder on my MacBook.)
I feel like I'm missing a key concept about getting remote files ...
I hope that adds more clarity. Any help figuring out remote file access is appreciated! :)
Justin
EDIT to get insights on 'rev' parameter:
Before my question, some background/my setup:
(a) I have a repo on my MacBook called 'basics'.
(b) I copied into 'basics' a directory of 501 files (called 'surface_files') that I subsequently pushed to a remote storage folder called 'gss'. After the push, 'gss' contains 220 hash directories.
The steps I used to get here are as follows:
> cd ~/Desktop/Work/basics
> git init
> dvc init
> dvc add ~/Desktop/Work/basics/surface_files
> git add .gitignore surface_files.dvc
> git commit -m "Add raw data"
> dvc remote add -d remote_storage ~/Desktop/Work/gss
> git commit .dvc/config -m "Configure remote storage"
> dvc push
> rm -rf ./.dvc/cache
> rm -rf ./surface_files
Next, I ran the following Python code to take one of my surface files, named surface_100141.dat, use dvc.api.get_url() to get the corresponding remote storage file name, and then copy that remote storage file onto my desktop under the file's original name, i.e., surface_100141.dat.
The code that does all this is as follows, but FIRST, MY QUESTION --- when I run the code as it is shown below, no problems; but when I uncomment the 'rev=' line, it fails. I am not sure why this is happening. I used git log and cat .git/refs/heads/master to make sure that I was getting the right hash. WHY IS THIS FAILING? That is my question.
(In full disclosure, my git knowledge is not too strong yet. I'm getting there, but it's still a work in progress! :))
import dvc.api
import os.path
from os import path
import shutil

filename = 'surface_100141.dat'  # This file name would be stored in my Oracle database
home_dir = os.path.expanduser('~') + '/'  # This simply expands '~' into '/Users/ricej/'

resource_url = dvc.api.get_url(
    path=f'surface_files/{filename}',  # Works when 'surface_files.dvc' exists, even when 'surface_files' directory and .dvc/cache do not
    repo=f'{home_dir}Desktop/Work/basics',
    # rev='5c92710e68c045d75865fa24f1b56a0a486a8a45',  # Commit hash, found using 'git log' or 'cat .git/refs/heads/master'
    remote='remote_storage')
resource_url = home_dir + resource_url
print(f'Remote file: {resource_url}')

new_dir = f'{home_dir}Desktop/'  # Will copy fetched file to desktop, for demonstration
new_file = new_dir + filename
print(f'Remote file copy: {new_file}')

if path.exists(new_file):
    os.remove(new_file)
dest = shutil.copy(resource_url, new_file)  # Check your desktop after this to see the remote file copy

I'm not 100% sure that I understand the question (it would be great to expand a bit on the actual use case you are trying to solve with this database), but I can share a few thoughts.
When we talk about DVC, I think you need to specify a few things to identify the file/directory:
Git commit + path (an actual path like data/data.xml). A commit (or, to be precise, any Git revision) is needed to identify the version of the data file.
Or a path in the DVC storage (/mnt/shared/storage/00/198493ef2343ao ...) + the actual name of the file. This way you would be saving the info that .dvc files have.
I would say the second way is not recommended, since to some extent it's an implementation detail (how DVC stores files internally). The public interface to DVC-organized data storage is its repository URL + commit + file name.
Edit (example):
with dvc.api.open(
    'activity.log',
    repo='location/of/dvc/project',
    remote='my-s3-bucket'
) as fd:
    for line in fd:
        match = re.search(r'user=(\w+)', line)
        # ... Process users activity log
The path location/of/dvc/project must point to an actual Git repo. This repo should have a .dvc or dvc.lock file that contains the activity.log name plus its hash in the remote storage:
outs:
- md5: a304afb96060aad90176268345e10355
  path: activity.log
By reading this Git repo and analyzing, let's say, activity.log.dvc, DVC will be able to construct the right path: s3://my-bucket/storage/a3/04afb96060aad90176268345e10355
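In other words, the storage path is derived from the MD5 recorded in the metafile: the first two hex characters become a directory name and the rest becomes the file name. A rough illustration of that layout convention (not a DVC API call, just the mapping described above):

def cache_path(md5_hash, storage_root='s3://my-bucket/storage'):
    # DVC's content-addressed layout: <root>/<first 2 hash chars>/<remaining chars>
    return '{}/{}/{}'.format(storage_root, md5_hash[:2], md5_hash[2:])

print(cache_path('a304afb96060aad90176268345e10355'))
# s3://my-bucket/storage/a3/04afb96060aad90176268345e10355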
The remote='my-s3-bucket' argument is optional. By default, DVC will use the remote that is defined in the repo itself.
Let's take another real example:
with dvc.api.open(
    'get-started/data.xml',
    repo='https://github.com/iterative/dataset-registry'
) as fd:
    for line in fd:
        match = re.search(r'user=(\w+)', line)
        # ... Process users activity log
In https://github.com/iterative/dataset-registry you can find the .dvc file, which (together with the repo's config) is enough for DVC to construct the path to the file:
https://remote.dvc.org/dataset-registry/a3/04afb96060aad90176268345e10355
You could run wget on this URL to download the file.
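If you would rather resolve that URL programmatically than hard-code it, dvc.api.get_url (the same call used in the question's edit) should return it; a minimal sketch:

import dvc.api

# Resolve the remote-storage URL of a tracked file without downloading it
url = dvc.api.get_url(
    'get-started/data.xml',
    repo='https://github.com/iterative/dataset-registry'
)
print(url)  # e.g. https://remote.dvc.org/dataset-registry/a3/04afb96060aad90176268345e10355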

Related

Python find broken symlinks caused by Git

I'm currently working on a script that is supposed to go through a cloned Git repo and remove any existing symlinks (broken or not) before recreating them for the user, to ease project initialization.
The solved problems:
Finding symbolic links (broken or unbroken) in Python is very well documented.
GitHub/GitLab breaking symlinks upon downloading/cloning/updating repositories is also very well documented as is how to fix this problem. Tl;dr: Git will download symlinks within the repo as plain text files (with no extension) containing only the symlink path if certain config flags are not set properly.
The unsolved problem:
My problem is that developers may download this repo without realizing the issues with Git, and end up with the symbolic links "checked out as small plain files that contain the link text" which is completely undetectable (as far as I can tell) when parsing the cloned files/directories (via existing base libraries). Running os.stat() on one of these broken symlink files returns information as though it were a normal text file:
os.stat_result(st_mode=33206, st_ino=14073748835637675, st_dev=2149440935, st_nlink=1, st_uid=0, st_gid=0, st_size=42, st_atime=1671662717, st_mtime=1671662699, st_ctime=1671662699)
The st_mode information only indicates that it is a normal text file: 100666 (the first 3 digits are the file type and the last 3 are the UNIX-style permissions). It should show up as 120000.
The os.path.islink() function only ever returns False.
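For reference, this is how those mode bits can be inspected and decoded from Python (the path is illustrative):

import os
import stat

st = os.lstat("microservices/service/symlink_file")  # lstat: do not follow links
print(oct(st.st_mode))            # 0o100666 -> regular file with 666 permissions
print(stat.S_ISLNK(st.st_mode))   # False for the plain-text file Git checked out
print(stat.filemode(st.st_mode))  # '-rw-rw-rw-' rather than 'lrwxrwxrwx'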
THE CONFUSING PART is that when I run git ls-files -s ./microservices/service/symlink_file it gives 120000 as the mode bits, which, according to the documentation, indicates that this file is a symlink. However, I cannot figure out how to see this information from within Python.
I've tried a bunch of things to try and find and delete existing symlinks. Here's the base method that just finds symlink directories and then deletes them:
def clearsymlinks(cwd: str = ""):
"""
Crawls starting from root directory and deletes all symlinks
within the directory tree.
"""
if not cwd:
cwd = os.getcwd()
print(f"Clearing symlinks - starting from {cwd}")
# Create a queue
cwd_dirs: list[str] = [cwd]
while len(cwd_dirs) > 0:
processing_dir: str = cwd_dirs.pop()
# print(f"Processing {processing_dir}") # Only when debugging - else it's too much output spam
for child_dir in os.listdir(processing_dir):
child_dir_path = os.path.join(processing_dir, child_dir)
# check if current item is a directory
if not os.path.isdir(child_dir_path):
if os.path.islink(child_dir_path):
print(f"-- Found symbolic link file {child_dir_path} - removing.\n")
os.remove(child_dir_path)
# skip the dir checking
continue
# Check if the child dir is a symlink
if os.path.islink(child_dir_path):
print(f"-- Found symlink directory {child_dir_path} - removing.")
os.remove(child_dir_path)
else:
# Add the child dir to the queue
cwd_dirs.append(child_dir_path)
After deleting symlinks I run os.symlink(symlink_src, symlink_dst) and generally run into the following error:
Traceback (most recent call last):
File "C:\Users\me\my_repo\remakesymlinks.py", line 123, in main
os.symlink(symlink_src, symlink_dst)
FileExistsError: [WinError 183] Cannot create a file when that file already exists: 'C:\\Users\\me\\my_repo\\SharedPy' -> 'C:\\Users\\me\\my_repo\\microservices\\service\\symlink_file'
A workaround to specifically this error (inside the create symlink method) is:
try:
    os.symlink(symlink_src, symlink_dst)
except FileExistsError:
    os.remove(symlink_dst)
    os.symlink(symlink_src, symlink_dst)
But this is not ideal because it doesn't prevent a huge list of defunct/broken symlinks from piling up in the directory. I should be able to find any symlinks (working, broken, non-existent, etc.) and then delete them.
I have a list of the symlinks that should be created by my script, but extracting the list of targets from it is another workaround that also causes a 'symlink leak'. Below is how I'm currently finding the broken symlinks, purely for testing purposes.
if not os.path.isdir(child_dir_path):
    if os.path.basename(child_dir_path) in [s.symlink_install_to for s in dirs_to_process]:
        print(f"-- Found symlink file {child_dir_path} - removing.")
        os.remove(child_dir_path)
    # skip the dir checking
    continue
A rudimentary solution is to filter for only 'text/plain' files with exactly 1 line (since checking anything else is pointless) and try to determine whether that single line is just a file path (this seems excessive though):
# Check if Git downloaded the symlink as a plain text file (undetectable broken symlink)
if not os.path.isdir(child_dir_path):
    try:
        if magic.Magic(mime = True, uncompress = True).from_file(child_dir_path) == 'text/plain':
            with open(child_dir_path, 'r') as file:
                for i, line in enumerate(file):
                    if i >= 1:
                        raise StopIteration
                    else:
                        # this function is directly copied from https://stackoverflow.com/a/34102855/8705841
                        if is_path_exists_or_creatable(line):
                            print(f"-- Found broken Git link file '{child_dir_path}' - removing.")
                            print(f"\tContents: \"{line}\"")
                            # os.remove(child_dir_path)
                            raise StopIteration
    except (StopIteration, UnicodeDecodeError, magic.MagicException):
        file.close()
        continue
Clearly this solution would require a lot of refactoring (9 indents is pretty ridiculous) if it's the only viable option. The problem with this solution (currently) is that it also tries to delete any single-line file containing a string that does not break pathname requirements, e.g. import _virtualenv, some random test string, project-name. Filtering out files with spaces, or files without slashes, could potentially work, but this still feels like chasing edge cases instead of solving the actual problem.
I could potentially rewrite this script in Bash wherein I could, in addition to existing symlink search/destroy code, parse the results from git ls-files -s ... and then delete any files with the 120000 tag. But is this method feasible and/or reliable? There should be a way to do this from within Python since Bash isn't going to run on every OS.
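For what it's worth, the git ls-files -s idea can be driven from within Python via subprocess rather than Bash; a minimal sketch under that assumption (mode 120000 marks index entries Git considers symlinks):

import subprocess

def git_symlink_paths(repo_root: str) -> list[str]:
    """Return repo-relative paths that Git records as symlinks (mode 120000)."""
    output = subprocess.check_output(
        ["git", "ls-files", "-s"], cwd=repo_root, text=True
    )
    symlinks = []
    for line in output.splitlines():
        # Each line looks like: "<mode> <object-hash> <stage>\t<path>"
        meta, path = line.split("\t", 1)
        if meta.split()[0] == "120000":
            symlinks.append(path)
    return symlinks

# e.g. delete them before recreating:
# for p in git_symlink_paths(os.getcwd()):
#     os.remove(os.path.join(os.getcwd(), p))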
Note: file names have been redacted/changed after copy-paste for privacy, they shouldn't matter anyways since they are generated dynamically by the path searching functions

Python API for Github, getting contents in specific directory for specific branch not returning all content

Using the PyGithub API, I am attempting to retrieve all contents from a specific folder from a specific branch of a repository hosted with Github. I can't share the actual repository or specifics regarding the data, but the code I am using is this:
import github
import json
import requests
import base64
from collections import namedtuple

Package = namedtuple('Package', 'name version')

# Parameters
gh_token = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
header = {"Authorization": f"token {gh_token}"}
gh_hostname = 'devtopia.xxx.com'
gh = github.Github(base_url=f'https://{gh_hostname}/api/v3', login_or_token=gh_token)

repo_name = "xxxxxxxxx/SupportFiles"
conda_meta = "xxxxxxx/bin/Python/envs/xxxxxx-xx/conda-meta"
repo = gh.get_repo(repo_name)

def parse_conda_meta(branch):
    package_list = []
    meta_contents = repo.get_contents(conda_meta, ref=branch)  # << Returns less files than expected for
                                                               #    a specified branch "xxx/release/3.2.0",
                                                               #    returns expected number of files for
                                                               #    "master" branch.
    for i, pkg in enumerate(meta_contents):
        if ".json" in pkg.name:  # filter for JSON files
            print(i, pkg.name)
            # Need to use GitHub Data API (REST) blobs instead of easier
            # `github` with `pkg.decoded_content` here because that method
            # only works with files <= 1MB whereas Data API allows for
            # reading files <= 100MB.
            resp = requests.get(f"https://devtopia.xxxx.com/api/v3/repos/xxxxxxxxx/SupportFiles/git/blobs/{pkg.sha}?ref={branch}", headers=header)
            pkg_cont = json.loads(base64.b64decode(json.loads(resp.content)["content"]))
            package_list.append(Package(pkg_cont['name'], pkg_cont['version']))
        else:
            print('>>', i, pkg.name)
    return package_list

if __name__ == "__main__":
    pkgs = parse_conda_meta("xxx/release/3.2.0")
    print(pkgs)
    print(len(pkgs))
For some reason that I can't get to the bottom of, I am not getting the correct number of files returned by repo.get_contents(conda_meta, ref=branch). For the branch that I am specifying, when that branch is checked out I see 186 files in the conda-meta folder. However, repo.get_contents(conda_meta, ref=branch) returns only 182; I am missing four JSON files.
Is there some limitation to repo.get_contents that I'm not aware of? I've been reading the docs but can't find anything that hints at the problem I am having. There is one bit about it only handling files up to 1 MB, but I am seeing files larger than this returned (e.g. python is 1.204 MB and is returned in the list of files). I believe this just applies to reading file content over 1 MB, which I deal with by using the GitHub Data API (REST) further downstream. Is there something I'm doing wrong here?
Thanks for reading, any help with this is much appreciated!
Update with solution!
The Problem:
After some more digging, I have found the problem's cause. It's not to do with the code above or repo.get_contents(conda_meta, ref=branch) specifically. It is actually a unix/windows clash that was mistakenly introduced into our repository for this specific branch "xxx/release/3.2.0" but not present in others.
So what was the problem? NTFS (and Windows more broadly) is case-insensitive by default, but Git comes from the Unix world and is case-sensitive by default.
We inadvertently created two folders for Python in the bin directory of the conda_meta path (xxxxxx/bin/), one folder called "Python" and one called "python" (note the lower case). When pulling the repository locally, only the "Python" folder shows up, containing all 186 files. On GitHub, however, the path with "Python" contains 182 files while the path with "python" contains the remaining 4 files.
The Solution:
The solution is to add a conda_meta_folders parameter to parse_conda_meta that takes a list of paths and searches each directory, as sketched below. There might be a slicker solution though; I'm looking into whether it is possible to do something like git config core.ignorecase true with the PyGithub API. Does anyone know if it is possible to have PyGithub honor this or be configured for this?
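For reference, a minimal sketch of that multi-folder variant, assuming the same repo, header, Package, and imports as in the script above; the folder paths passed in are placeholders, not the real paths:

def parse_conda_meta_folders(branch, conda_meta_folders):
    """Collect packages from every conda-meta folder variant (e.g. 'Python' and 'python')."""
    package_list = []
    for folder in conda_meta_folders:
        for pkg in repo.get_contents(folder, ref=branch):
            if pkg.name.endswith(".json"):
                resp = requests.get(
                    f"https://devtopia.xxxx.com/api/v3/repos/xxxxxxxxx/SupportFiles/git/blobs/{pkg.sha}?ref={branch}",
                    headers=header,
                )
                pkg_cont = json.loads(base64.b64decode(json.loads(resp.content)["content"]))
                package_list.append(Package(pkg_cont['name'], pkg_cont['version']))
    return package_list

# e.g. covering both case variants of the folder:
# pkgs = parse_conda_meta_folders("xxx/release/3.2.0",
#                                 ["xxxxxxx/bin/Python/envs/xxxxxx-xx/conda-meta",
#                                  "xxxxxxx/bin/python/envs/xxxxxx-xx/conda-meta"])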

Printing changed file paths in latest commit in gitpython

I'm trying to get the changed file paths between the latest commit and the one before it in git python.
The problem is that even if the latest commit has 1 changed file, it's displaying a lot more.
Below is my code:
repo = git.Repo(path)
commits_list = list(repo.iter_commits())
a_commit = commits_list[0]
b_commit = commits_list[-1]
itemDiff = a_commit.diff(b_commit)
for item in itemDiff:
    print(item.a_path)
I'm trying this against a local cloned repo. What am I doing wrong?
If you need to read from the repo, consider using Pydriller, a higher-level abstraction built on top of GitPython.
from pydriller import RepositoryMining

for commit in RepositoryMining("repo").traverse_commits():
    for modified_file in commit.modifications:
        modified_file.new_path  # here you have the path of all the files changed in the commit
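If you want to stay with plain GitPython, note that iter_commits() yields commits newest-first, so commits_list[-1] in the question is the oldest commit in the history, which is why the diff spans far more than one commit. A minimal sketch comparing the latest commit with its immediate predecessor instead (path is illustrative, and it assumes the latest commit has a parent):

import git

repo = git.Repo("path/to/repo")
latest = repo.head.commit
previous = latest.parents[0]  # immediate predecessor of the latest commit

# Diff previous -> latest so the paths reflect what changed in the newest commit
for item in previous.diff(latest):
    print(item.change_type, item.a_path)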

GitPython pull/fetch retrieve progress

How do I retrieve the file list when I pull/fetch? Assume that the upstream is set and that I am pulling from the upstream.
repo = git.Repo('/repo_location/')
result = repo.git.pull()
According to the API reference, the return value is an iterable list, but I'm unable to use it that way.
If I do print(result) it correctly prints to stdout, but not when I iterate.
I found no GitPython method to do this, so I executed git diff-tree:
repo = Repo(repo_dir)
# first get the remote information to get all changed files, then pull the repository
# has to be done in this order, because after pulling the local repository does not differ from the remote
repo.remote().fetch()
repo.remote().pull()
# the "-t" option also lists all changed directories
diff_tree = repo.git.execute('git diff-tree --name-status -t master origin/master')
for line in diff_tree.splitlines():
    change_status, file_path = line.split('\t')
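An alternative sketch that stays within GitPython objects, using the FetchInfo records returned by fetch(); the attribute names are GitPython's, but treat the overall flow as an assumption rather than a verified recipe:

import git

repo = git.Repo('/repo_location/')
origin = repo.remote()

# fetch() returns FetchInfo records; old_commit is what the remote ref pointed to
# before the fetch (None for brand-new refs), commit is what it points to now.
for info in origin.fetch():
    if info.old_commit is None:
        continue
    for diff in info.old_commit.diff(info.commit):
        print(diff.change_type, diff.b_path or diff.a_path)

origin.pull()  # update the working tree afterwards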

Render hg.tar into in-memory tree-structure after getting back from MongoDB

I am working on a project where I have to store .hg directories. The easiest way is to pack the .hg into hg.tar. I save it in MongoDB's GridFS filesystem.
If I go with this plan, I have to read the tar out.
import tarfile, cStringIO as io

repo = get_repo(saved_repo.id)
ios = io.StringIO()
ios.write(repo.hgfile.read())
ios.seek(0)
tar = tarfile.open(mode='r', fileobj=ios)
members = tar.getmembers()
#for info in members:
#    tar.extract(info.name, '/tmp')
for file in members:
    print file.name, file.isdir()
This is working code. I can get all the file and directory names as the loop runs.
My question is: how do I extract this tar into a valid, filesystem-like directory? I can .extractfile individual members into memory, but if I want to feed this into the Mercurial API, I probably need the entire .hg directory in memory, laid out the way it exists in the filesystem.
Thoughts?
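If touching disk were acceptable as a stopgap, the simplest baseline would be to unpack the tar into a temporary directory and point Mercurial at it; a rough sketch in the same Python 2 style as above (the hg.repository call is an assumption for older Mercurial versions, not a verified recipe):

import tarfile, tempfile, shutil
import cStringIO as io

repo = get_repo(saved_repo.id)
ios = io.StringIO()
ios.write(repo.hgfile.read())
ios.seek(0)

# Unpack the archived .hg into a throwaway directory on disk.
tmpdir = tempfile.mkdtemp()
tar = tarfile.open(mode='r', fileobj=ios)
tar.extractall(tmpdir)
tar.close()

# Point Mercurial at the unpacked directory.
from mercurial import hg, ui
hgrepo = hg.repository(ui.ui(), tmpdir)

# ... use hgrepo via the Mercurial API, then clean up
shutil.rmtree(tmpdir)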
Mercurial has a concept called opener that's used to abstract filesystem access. I first looked at http://hg.intevation.org/mercurial/crew/file/tip/mercurial/revlog.py to see if you can replace the revlog class (which is the base class for changelog, manifest log and filelogs), but recent versions of Mercurial also have a VFS abstraction layer. It can be found in http://hg.intevation.org/mercurial/crew/file/8c64c4af21a4/mercurial/scmutil.py#l202 and is used by the localrepo.localrepository class for all file access.
