I'm trying to set up the SVN commit hook with Trac using this script.
It is being called without issue, but the problem is this line:
144 repos = self.env.get_repository()
Because I am calling this remotely, self.env.get_repository() looks for the repository using the server drive and not the local drive mapping. That is, it is looking for E:/Projects/svn/InfoProj and not Y:/Projects/svn/InfoProj.
I noticed a changeset on the Trac site that lets you call get_repository() and pass in the path as a variable, but it seems this hasn't made it into the latest stable release yet.
This version of the script (the one submitted by code monkey) appears to do things differently, but is throwing an error that seems related:
154 if url is None:
155     url = self.env.config.get('project', 'url')
156 self.env.href = Href(url)
157 self.env.abs_href = Href(url)
Lines 156/157 throw this error: Warning: TypeError: 'str' object is not callable
The 0.10.3 stable version of the script throws a completely different error:
Warning: NameError: global name 'core' is not defined
I'm setting up trac for the first time on a Windows box with a remote repository. I'm using trac 0.11 stable with Python 2.6.
I thought a lot more people committing across servers would have come across this problem, but I've looked around and couldn't find a solution. I suppose Linux has a more graceful way of handling this.
Thanks in advance.
This is totally do-able and just requires a couple of small hacks... woo hoo!
The problem I was having is that get_repository reads the value of the svn repository from the trac.ini file. This was pointing at E:/ and not at Y:/. The simple fix involves a check to see if the repository is at repository_dir and if not, then check at a new variable remote_repository_dir. The second part of the fix involves removing the error message from cache.py that checks to see if the current repository address matches the one being passed in.
As always, use this at your own risk and back everything up beforehand!!!
First, open your trac.ini file and add a new variable 'remote_repository_dir' underneath the 'repository_dir' variable. The remote repository dir will point to the mapped drive on your local machine. It should now look something like this:
repository_dir = E:/Projects/svn/InfoProj
remote_repository_dir = Y:/Projects/svn/InfoProj
Next we will modify the api.py file to check for the new variable if it can't find the repository at the repository_dir location. Around :71 you should have something like this:
repository_dir = Option('trac', 'repository_dir', '',
    """Path to local repository. This can also be a relative path
    (''since 0.11'').""")
Underneath this line add:
remote_repository_dir = Option('trac', 'remote_repository_dir', '',
    """Path to remote repository.""")
Next near :156 you will have this:
rtype, rdir = self.repository_type, self.repository_dir
if not os.path.isabs(rdir):
    rdir = os.path.join(self.env.path, rdir)
Change that to this:
rtype, rdir = self.repository_type, self.repository_dir
# Fall back to the remote (mapped-drive) path when the local
# repository_dir does not exist on this machine.
if not os.path.isdir(rdir):
    rdir = self.remote_repository_dir
if not os.path.isabs(rdir):
    rdir = os.path.join(self.env.path, rdir)
Finally, you will need to remove the alert in the cache.py file (note this is not the best way to do it; you should be able to include the remote variable as part of the check, as sketched further below, but for now it works).
In cache.py near :97 it should look like this:
if repository_dir:
    # directory part of the repo name can vary on case insensitive fs
    if os.path.normcase(repository_dir) != os.path.normcase(self.name):
        self.log.info("'repository_dir' has changed from %r to %r"
                      % (repository_dir, self.name))
        raise TracError(_("The 'repository_dir' has changed, a "
                          "'trac-admin resync' operation is needed."))
elif repository_dir is None: #
    self.log.info('Storing initial "repository_dir": %s' % self.name)
    cursor.execute("INSERT INTO system (name,value) VALUES (%s,%s)",
                   (CACHE_REPOSITORY_DIR, self.name,))
else: # 'repository_dir' cleared by a resync
    self.log.info('Resetting "repository_dir": %s' % self.name)
    cursor.execute("UPDATE system SET value=%s WHERE name=%s",
                   (self.name, CACHE_REPOSITORY_DIR))
We are going to remove the first branch of the if statement, so it should now look like this:
if repository_dir is None: #
    self.log.info('Storing initial "repository_dir": %s' % self.name)
    cursor.execute("INSERT INTO system (name,value) VALUES (%s,%s)",
                   (CACHE_REPOSITORY_DIR, self.name,))
else: # 'repository_dir' cleared by a resync
    self.log.info('Resetting "repository_dir": %s' % self.name)
    cursor.execute("UPDATE system SET value=%s WHERE name=%s",
                   (self.name, CACHE_REPOSITORY_DIR))
Warning! Doing this will mean that it no longer gives you an error if your directory has changed and you need a resync.
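If you would rather keep the safety check than delete it, a rough sketch of the alternative hinted at above might look like this (untested; remote_repository_dir is hypothetical here, since the stock cache.py does not have that config value in scope and you would need to thread it through yourself):

known_dirs = set(os.path.normcase(d)
                 for d in (repository_dir, remote_repository_dir) if d)
# Accept either the local or the remote path instead of raising
# whenever the stored name differs from repository_dir.
if repository_dir and os.path.normcase(self.name) not in known_dirs:
    self.log.info("'repository_dir' has changed from %r to %r"
                  % (repository_dir, self.name))
    raise TracError(_("The 'repository_dir' has changed, a "
                      "'trac-admin resync' operation is needed."))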
Hope this helps someone.
Not sure this is the right place to ask but I'll go ahead anyway.
I want to modify the aws s3 sync command so that it does not overwrite newer files at the destination. Currently it compares the file size and timestamp, and if either is different the file at the source is copied to the destination.
Having a look at the AWS CLI source code, it seems possible to simply modify the Python code here:
class SizeAndLastModifiedSync(BaseSync):
    def determine_should_sync(self, src_file, dest_file):
        same_size = self.compare_size(src_file, dest_file)
        same_last_modified_time = self.compare_time(src_file, dest_file)
        should_sync = (not same_size) or (not same_last_modified_time)
        if should_sync:
            LOG.debug(
                "syncing: %s -> %s, size: %s -> %s, modified time: %s -> %s",
                src_file.src, src_file.dest,
                src_file.size, dest_file.size,
                src_file.last_update, dest_file.last_update)
        return should_sync
But not being much of a Python expert, I am not sure whether there are other considerations that would prevent these modifications from working.
It seems all I would need to do is add a check to see whether the dest_file timestamp is greater than the src_file timestamp, after resolving things like timezones.
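Something like this is what I have in mind; a rough, untested sketch (it assumes src_file.last_update and dest_file.last_update are comparable datetimes, e.g. both already normalized to UTC):

class SizeAndLastModifiedSync(BaseSync):
    def determine_should_sync(self, src_file, dest_file):
        same_size = self.compare_size(src_file, dest_file)
        same_last_modified_time = self.compare_time(src_file, dest_file)
        # Proposed tweak: never overwrite a destination file that is
        # already newer than the source.
        dest_is_newer = dest_file.last_update > src_file.last_update
        return ((not same_size) or (not same_last_modified_time)) \
            and not dest_is_newer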
Any pointers on the best approach would be appreciated.
I'm currently running pygit2 0.24.1 (along with libgit2 0.24.1), working on a repository where I have two branches (say prod and dev).
Every change is first committed to the dev branch and pushed to the remote repository. To do that, I have this piece of code:
repo = Repository('/foo/bar')
repo.checkout('refs/heads/dev')
index = repo.index
index.add('any_file')
index.write()
tree = index.write_tree()
author = Signature('foo', 'foo@bar')
committer = Signature('foo', 'foo@bar')
repo.create_commit('refs/heads/dev', author, committer, 'Just another commit', tree, [repo.head.get_object().hex])
up = UserPass('foo', '***')
rc = RemoteCallbacks(credentials=up)
repo.remotes['origin'].push(['refs/heads/dev'], rc)
This works fine: I can see the local commit and also the remote commit, and the local repo remains clean:
nothing to commit, working directory clean
Next, I check out the prod branch and want to merge in the HEAD commit from dev. To do so, I use this other piece of code (assuming I always start checked out on the dev branch):
head_commit = repo.head
repo.checkout('refs/heads/prod')
prod_branch_tip = repo.lookup_reference('HEAD').resolve()
prod_branch_tip.set_target(head_commit.target)
rc = RemoteCallbacks(credentials=up)
repo.remotes['origin'].push(['refs/heads/prod'], rc)
repo.checkout('refs/heads/dev')
I actually can see the branch being merged both locally and remotely, but after this piece of code runs, the committed file always remains in a modified state on branch dev.
On branch dev
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)
        modified:   any_file
I'm completely sure no one is modifying that file, though. Actually, a git diff shows nothing. This issue happens only with files that have already been committed at least once previously. When files are new, this works perfectly and leaves the file in a clean state.
I'm sure I'm missing some detail, but I'm unable to find out what it is. Why is the file left as modified?
EDIT: Just to clarify, my aim is to do a FF (fast-forward) merge. I know there's some documentation about doing a non-FF merge in the pygit2 documentation, but I'd prefer the fast-forward because it keeps commit hashes the same across branches.
EDIT 2: After @Leon's comment, I double checked and indeed, git diff shows no output while git diff --cached shows the content the file had before committing. That's odd, since I can see the change successfully committed in the local and remote repositories, but it looks like the file is afterwards changed back to the previous content...
An example of that:
Having a file with content '12345' committed and pushed, I replace that string with '54321'
I run the code above
git log shows the file commited correctly, on the remote repo I see the file with content '54321', while locally git diff --cached shows this:
@@ -1 +1 @@
-54321
+12345
I would explain the observed problem as follows:
head_commit = repo.head
# This resets the index and the working tree to the old state
# and records that we are in a state corresponding to the commit
# pointed to by refs/heads/prod
repo.checkout('refs/heads/prod')
prod_branch_tip = repo.lookup_reference('HEAD').resolve()
# This changes where refs/heads/prod points. The index and
# the working tree are not updated, but (probably due to a bug in pygit2)
# they are not marked as gone-out-of-sync with refs/heads/prod
prod_branch_tip.set_target(head_commit.target)
rc = RemoteCallbacks(credentials=up)
repo.remotes['origin'].push(['refs/heads/prod'], rc)
# Now we must switch to a state corresponding to refs/heads/dev. It turns
# out that refs/heads/dev points to the same commit as refs/heads/prod.
# But we are already in the (clean) state corresponding to refs/heads/prod!
# Therefore there is no need to update the index and/or the working tree.
# So this simply changes HEAD to refs/heads/dev
repo.checkout('refs/heads/dev')
The solution is to fast-forward the branch without checking it out. The following code is devoid of the described problem:
head_commit = repo.head
prod_branch_tip = repo.lookup_branch('prod')
prod_branch_tip.set_target(head_commit.target)
rc = RemoteCallbacks(credentials=up)
repo.remotes['origin'].push(['refs/heads/prod'], rc)
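As a quick sanity check after the fast-forward, you can confirm the working tree stayed clean. A small sketch (in pygit2, Repository.status() returns a dict mapping paths to status flags; untracked files are reported as GIT_STATUS_WT_NEW, so they are filtered out here):

from pygit2 import GIT_STATUS_WT_NEW

# Expect no modified or staged entries after the fast-forward.
dirty = dict((path, flags) for path, flags in repo.status().items()
             if flags != GIT_STATUS_WT_NEW)
assert not dirty, 'working tree should be clean after the fast-forward'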
How can I use GitPython to determine whether:
My local branch is ahead of the remote (I can safely push)
My local branch is behind the remote (I can safely pull)
My local branch has diverged from the remote?
To check if the local and remote are the same, I'm doing this:
def local_and_remote_are_at_same_commit(repo, remote):
    local_commit = repo.commit()
    remote_commit = remote.fetch()[0].commit
    return local_commit.hexsha == remote_commit.hexsha
See https://stackoverflow.com/a/15862203/197789
E.g.
commits_behind = repo.iter_commits('master..origin/master')
and
commits_ahead = repo.iter_commits('origin/master..master')
Then you can use something like the following to go from iterator to a count:
count = sum(1 for c in commits_ahead)
(You may want to fetch from the remotes before running iter_commits, e.g. repo.remotes.origin.fetch().)
This was last checked with GitPython 1.0.2.
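Putting those pieces together, a minimal sketch that distinguishes the three cases from the question (it assumes origin/master has already been fetched, as noted above):

ahead = sum(1 for _ in repo.iter_commits('origin/master..master'))
behind = sum(1 for _ in repo.iter_commits('master..origin/master'))
if ahead and behind:
    state = 'diverged'
elif ahead:
    state = 'ahead of remote (safe to push)'
elif behind:
    state = 'behind remote (safe to pull)'
else:
    state = 'up to date'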
The following, which I got from this Stack Overflow answer, worked better for me:
commits_diff = repo.git.rev_list('--left-right', '--count', f'{branch}...{branch}@{{u}}')
num_ahead, num_behind = commits_diff.split('\t')
print(f'num_commits_ahead: {num_ahead}')
print(f'num_commits_behind: {num_behind}')
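Note that rev_list returns its output as a single tab-separated string, so convert the two counts if you need integers:

num_ahead, num_behind = (int(n) for n in commits_diff.split('\t'))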
I have a Jython script that is used to set up a JDBC datasource on a WebSphere 7.0 server. I need to set several properties on that datasource. I am using this code, which works unless value is '-'.
def setCustomProperty(datasource, name, value):
    parms = ['-propertyName', name, '-propertyValue', value]
    AdminTask.setResourceProperty(datasource, parms)
I need to set the dateSeparator property on my datasource to just that: a dash. When I run this script with setCustomProperty(ds, 'dateSeparator', '-') I get an exception that says, "Invalid property: ". I figured out that AdminTask takes the dash to mean that another parameter/argument pair is expected.
Is there any way to get AdminTask to accept a dash?
NOTE: I can't set it via AdminConfig because I cannot find a way to get the id of the right property (I have multiple datasources).
Here is a solution that uses AdminConfig so that you can set the property value to the dash -. The solution accounts for multiple data sources, finding the correct one by specifying the appropriate scope (i.e. the server, though this could be adapted if your datasource exists within a different scope) and then finding the datasource by name. It also modifies the existing "dateSeparator" property if it exists, or creates it if it doesn't.
The code doesn't look terribly elegant, but I think it should solve your problem:
def setDataSourceProperty(cell, node, server, ds, propName, propVal) :
    scopes = AdminConfig.getid("/Cell:%s/Node:%s/Server:%s/" % (cell, node, server)).splitlines()
    datasources = AdminConfig.list("DataSource", scopes[0]).splitlines()
    for datasource in datasources :
        if AdminConfig.showAttribute(datasource, "name") == ds :
            propertySet = AdminConfig.list("J2EEResourcePropertySet", datasource).splitlines()
            customProp = [["name", propName], ["value", propVal]]
            for property in AdminConfig.list("J2EEResourceProperty", propertySet[0]).splitlines() :
                if AdminConfig.showAttribute(property, "name") == propName :
                    AdminConfig.modify(property, customProp)
                    return
            AdminConfig.create("J2EEResourceProperty", propertySet[0], customProp)

if (__name__ == "__main__"):
    setDataSourceProperty("myCell01", "myNode01", "myServer", "myDataSource", "dateSeparator", "-")
    AdminConfig.save()
Please see the Management Console preference settings. If you perform what you are attempting now through the console, you should get to see the Jython equivalent that the console generates for its own use; then just copy it.
@Schemetrical's solution worked for me. Just giving another example with JVM args.
Not commenting on the actual answer because I don't have enough reputation.
server_name = 'server1'
AdminTask.setGenericJVMArguments('[ -serverName %s -genericJvmArguments "-agentlib:getClasses" ]' % (server_name))
Try passing the parameters as a single string instead of an array, using double quotes to surround any value that starts with a dash.
Example:
AdminTask.setVariable('-variableName JDK_PARAMS -variableValue "-Xlp -Xscm250M" -variableDescription "-Yes -I -can -now -use -dashes -everywhere :-)" -scope Cell=MyCell')
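Applied to the original script, that would mean something like the following (untested; it assumes setResourceProperty accepts the same string form as setVariable above):

def setCustomProperty(datasource, name, value):
    # Quote the value so a leading dash is not parsed as a new parameter.
    parms = '-propertyName %s -propertyValue "%s"' % (name, value)
    AdminTask.setResourceProperty(datasource, parms)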
In my efforts to resolve Python issue 1578269, I've been working on trying to resolve the target of a symlink in a robust way. I started by using GetFinalPathNameByHandle as recommended here on stackoverflow and by Microsoft, but it turns out that technique fails when the target is in use (such as with pagefile.sys).
So, I've written a new routine to accomplish this using CreateFile and DeviceIoControl (as it appears this is what Explorer does). The relevant code from jaraco.windows.filesystem is included below.
The question is, is there a better technique for reliably resolving symlinks in Windows? Can you identify any issues with this implementation?
def relpath(path, start=os.path.curdir):
    """
    Like os.path.relpath, but actually honors the start path
    if supplied. See http://bugs.python.org/issue7195
    """
    return os.path.normpath(os.path.join(start, path))
def trace_symlink_target(link):
    """
    Given a file that is known to be a symlink, trace it to its ultimate
    target.

    Raises TargetNotPresent when the target cannot be determined.
    Raises ValueError when the specified link is not a symlink.
    """
    if not is_symlink(link):
        raise ValueError("link must point to a symlink on the system")
    while is_symlink(link):
        orig = os.path.dirname(link)
        link = _trace_symlink_immediate_target(link)
        link = relpath(link, orig)
    return link
def _trace_symlink_immediate_target(link):
    handle = CreateFile(
        link,
        0,
        FILE_SHARE_READ|FILE_SHARE_WRITE|FILE_SHARE_DELETE,
        None,
        OPEN_EXISTING,
        FILE_FLAG_OPEN_REPARSE_POINT|FILE_FLAG_BACKUP_SEMANTICS,
        None,
    )
    res = DeviceIoControl(handle, FSCTL_GET_REPARSE_POINT, None, 10240)
    bytes = create_string_buffer(res)
    p_rdb = cast(bytes, POINTER(REPARSE_DATA_BUFFER))
    rdb = p_rdb.contents
    if not rdb.tag == IO_REPARSE_TAG_SYMLINK:
        raise RuntimeError("Expected IO_REPARSE_TAG_SYMLINK, but got %d" % rdb.tag)
    return rdb.get_print_name()
Unfortunately I can't test with Vista until next week, but GetFinalPathNameByHandle should work, even for files in use. What's the problem you noticed?
In your code above, you forgot to close the file handle.
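For example, a sketch using try/finally (it assumes CloseHandle is exposed by the same Win32 bindings as CreateFile):

handle = CreateFile(
    link,
    0,
    FILE_SHARE_READ|FILE_SHARE_WRITE|FILE_SHARE_DELETE,
    None,
    OPEN_EXISTING,
    FILE_FLAG_OPEN_REPARSE_POINT|FILE_FLAG_BACKUP_SEMANTICS,
    None,
)
try:
    res = DeviceIoControl(handle, FSCTL_GET_REPARSE_POINT, None, 10240)
finally:
    CloseHandle(handle)  # release the handle even if DeviceIoControl raises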