I copied an SVN repository onto my computer using svnsync. Now, when I try to replay it using PySVN, it fails at a specific revision (29762) with the message:
pysvn._pysvn_2_6.ClientError: URL 'svn://svn.zope.org/repos/main/ZODB/trunk/src/Persistence' doesn't exist
I can check out or update up to the previous revision (29761) fine, but after that I get this error.
My objective is to analyze the code structure and its evolution, so I have
client.update(path,
              revision=pysvn.Revision(pysvn.opt_revision_kind.number, RevNumber),
              ignore_externals=False)
within a for loop that increments RevNumber
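For context, the whole driver loop looks roughly like this (a sketch; start_rev, end_rev, and analyze() are placeholder names, not from the original code):

import pysvn

client = pysvn.Client()
for RevNumber in range(start_rev, end_rev + 1):  # revision bounds are placeholders
    client.update(path,
                  revision=pysvn.Revision(pysvn.opt_revision_kind.number, RevNumber),
                  ignore_externals=False)
    analyze(path)  # hypothetical per-revision analysis step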
I am OK with ignoring this specific revision, so if there is a way around it that allows my checked-out code to progress and be analyzed, that will be fine (as long as there aren't many more instances of this happening).
Nevertheless, if my repo is a copy of a working repo, why doesn't it work, and how does the original one function properly?
Although the error message doesn't hint at it, I believe this was caused by running out of disk space. After deleting other files on the drive and re-running the script, it worked fine.
try:
    client.update(path,
                  revision=pysvn.Revision(pysvn.opt_revision_kind.number, RevNumber),
                  ignore_externals=False)
except pysvn.ClientError:
    print "Revision skipped at", RevNumber
    continue
This does not solve the underlying problem, but you can use try/except so your code carries on, if you are OK with omitting some revisions, as you've said.
I have decided to complete the cs231n course and do its assignments. I happily watched the first two videos of the course, and now I have to solve the first assignment.
I followed the guidelines step by step, as shown in the video at this link:
https://cs231n.github.io/setup-instructions/
Then, when I run the first cell, which is not the cell shown in the video but is nonetheless in the assignment1 file I downloaded from their site, I get a nasty error that has paralyzed me for a couple of hours. I'd be happy if anyone could respond.
If you take a look at my picture, you'll see that the files are present in Google Drive, but surprisingly, it gives an error out of nowhere.
Thanks.
===========================================================================
Update:
Here is a snapshot from the video (at that link) provided to guide students in setting up their Google Colab.
As you can see, in their video the first chunk of code specifies their working directory, but in the file they uploaded as assignment1, they have not done so!
cs231n is a virtual environment, according to the documentation at the link you provided.
Every time you want to work on an assignment, you should activate that environment with source ~/cs231n/bin/activate.
This happens because cs231n is not on the current path. Add these lines at the beginning of the code:
import sys
sys.path.append('./cs231n')
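For reference, a minimal sketch of what such a setup cell needs to do in Colab (the exact Drive path is an assumption and depends on where you put the assignment folder):

from google.colab import drive

# Mount Google Drive so the notebook can see the assignment files
drive.mount('/content/drive')

import sys
sys.path.append('/content/drive/My Drive/cs231n/assignment1')  # assumed location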
I had the same problem. I ran the first block, which imports the module 'sys' and appends the file path. After that I left and came back after five hours. When I logged in again, I didn't re-run the first block, and then I hit the problem. That's the thing: the newly allocated Google Drive is different, so Python doesn't know where the file path is. I ran the first block again and it really worked. That's why you succeeded by downloading the assignment1 file and running it again.
This is just my opinion. Hope this helps you!
This was ridiculous!
After dealing with it for a couple of hours (trying to discover the current directory using cwd and cd) to no avail, I decided to clear ALL files from my Google Drive and download the assignment1 file again.
Surprisingly, this time the Colab page had the setup code at its beginning. I still wonder how that happened.
I'm trying to run an automated test in RIDE (Ride.py). This test works on my colleague's computer but for some reason does not work on mine. The test starts, but at a certain point I get the following error:
[ ERROR ] Calling method 'start_keyword' of listener 'C:\Python27\lib\site-packages\robotide\contrib\testrunner\TestRunnerAgent.py' failed: IndexError: list index out of range
The interesting part is that this error occurs at the same spot every time, but with a different test it happens at a different point.
I tried to google several things and nothing worked. One suggestion was that a '#' comment somewhere was causing the crash. I looked, but I don't see a '#' comment anywhere.
Another suggestion led me to believe my TestRunnerAgent.py file must have been installed incorrectly. I went online to find the file and replaced it. This did not work either (I reran the test before and after a restart of RIDE).
We tried to re-import the test files, thinking perhaps something had gone wrong there. This did not help either.
Googling just the last part (IndexError: list index out of range) suggested that it does not recognize all the lines of code in the back-end file. I would have no clue how to solve this, as I'm not a major coder.
One difference between me and my colleague could be the versions. I downloaded Python 2.7.16 and RIDE 1.7.3.1. My colleague uses older versions of both Python and RIDE. Perhaps the problem lies there?
https://paste.fedoraproject.org/paste/TLekH3az0m4wuUyM8C2RYw
I expect the test to run without failing (it is a happy flow). I have included some screenshots with code in the previous segment that might help.
I downgraded to the same version of RIDE as my colleague.
This seems to have fixed the issue.
I am using GitPython to clone a repository from a Gitlab server.
import git

git.Repo.clone_from(gitlab_ssh_URL, local_path)
Later I have another script that tries to update this repo.
try:
    my_repo = git.Repo(local_path)
    my_repo.remotes.origin.pull()
except (git.exc.InvalidGitRepositoryError, git.exc.NoSuchPathError):
    print("Invalid repository: {}".format(local_path))
This works great, except if in between I check out a tag like this:
tag_id = choose_tag()  # Returns the position of an existing tag in my_repo.tags
my_repo.head.reference = my_repo.tags[tag_id]
my_repo.head.reset(index=True, working_tree=True)
In this case I get a GitCommandError when pulling:
git.exc.GitCommandError: 'git pull -v origin' returned with exit code 1
I have read the documentation twice already and I don't see where the problem is, especially since, if I try to pull this repo with a dedicated tool like SourceTree, it works without error or warning.
I don't understand how the fact that I checked out a tagged version, even with a detached HEAD, prevents me from pulling.
What should I do to pull in this case?
What is happening here, and what am I missing?
Edit: as advised, I looked at exception.stdout and exception.stderr, and there is nothing useful there (respectively b'' and None). That's why I'm having a hard time understanding what's wrong.
I think it's a good idea to learn more about what is happening first (question 2: what is going on?), and that should guide you to the answer to question 1 (how to fix it).
To learn more about what went wrong, you can print stdout and stderr from the exception. Git normally prints error details to the console, so something should be in stdout or stderr:
try:
    git.Repo.clone_from(gitlab_ssh_URL, local_path)
except git.GitCommandError as exception:
    print(exception)
    if exception.stdout:
        print('!! stdout was:')
        print(exception.stdout)
    if exception.stderr:
        print('!! stderr was:')
        print(exception.stderr)
As a side note, I myself have had issues a few times when I did many operations on the git.Repo object before using it to interact with the back-end (i.e. git itself). In my opinion there sometimes seem to be data-caching issues on the GitPython side and a lack of synchronisation between the data in the repository (the .git directory) and the data structures in the git.Repo object.
EDIT:
Ok, the problem seems to be pulling on a detached head - which is probably not what you want to do anyway.
Still, you can work around the problem. Since from the detached head you would do git checkout master only in order to do git pull and then go back to the detached head, you can skip pulling altogether and instead use git fetch <remote> <source>:<destination>, like this: git fetch origin master:master. This fetches the remote and merges your local master branch with its tracking branch without checking it out, so you can remain in the detached head state all along without any problems. See this SO answer for more details on this unconventional use of the fetch command: https://stackoverflow.com/a/23941734/4973698
With GitPython, the code could look something like this:
my_repo = git.Repo(local_path)
tag_id = choose_tag()  # Returns the position of an existing tag in my_repo.tags
my_repo.head.reference = my_repo.tags[tag_id]
my_repo.head.reset(index=True, working_tree=True)
fetch_info = my_repo.remotes.origin.fetch('master:master')
for info in fetch_info:
    print('{} {} {}'.format(info.ref, info.old_commit, info.flags))
and it would print something like this:
master 11249124f123a394132523513 64
... so flags equal 64. What does that mean? If you print git.FetchInfo.FAST_FORWARD, the result is 64, which means the fetch was of the fast-forward type: your local branch was successfully merged with the remote tracking branch. In other words, you have effectively executed git pull origin master without checking out master.
Important note: this kind of fetch works only if your branch can be merged with the remote branch using a fast-forward merge.
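Since flags is a bit field, a more robust check than comparing for equality is to test the fast-forward bit (a small sketch building on the fetch loop above):

for info in fetch_info:
    # FetchInfo.flags is a bit field, so test the bit instead of comparing
    # the whole value for equality
    if info.flags & git.FetchInfo.FAST_FORWARD:
        print('local master was fast-forwarded to {}'.format(info.ref))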
The answer is that trying to pull on a detached HEAD is not a good idea, even if some smart clients handle this case gracefully.
So my solution in this case is to check out the latest version of my branch (master), pull, and then check out the desired tagged version again.
However, the poor error message given by GitPython is regrettable.
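In GitPython, that workflow might look roughly like this (a sketch; choose_tag() is the hypothetical helper from the question, and the branch is assumed to be master):

import git

my_repo = git.Repo(local_path)
# Re-attach HEAD to master so that pulling is well-defined
my_repo.heads.master.checkout()
my_repo.remotes.origin.pull()
# Go back to the desired tagged version (detached HEAD again)
tag_id = choose_tag()  # hypothetical helper returning an index into my_repo.tags
my_repo.head.reference = my_repo.tags[tag_id]
my_repo.head.reset(index=True, working_tree=True)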
What I would like is to run a script that automatically checks for new assets (files that aren't code) submitted to a specific directory, and then every so often automatically commits those files and pushes them.
I could write a script that does this through the command line, but I was mostly curious whether Mercurial offers any special functionality for this. Specifically, I'd really like some kind of return/error code so that my script knows if the process breaks at any point and can email the error to specific developers. For example, if the push fails because a pull is necessary first, I'd like the script to get a code so that it knows this and can handle it properly.
I've tried researching this and can only find things like automatically pushing after a commit, which isn't exactly what I'm looking for.
You can always check the exit codes of the commands used (see the sketch after this list):
hg add (if new, unversioned files appeared in the WC) "Returns 0 if all files are successfully added": non-zero means "some troubles here, not all files added"
hg commit "Returns 0 on success, 1 if nothing changed": 1 means "no commit, nothing to push"
hg push "Returns 0 if push was successful, 1 if nothing to push"
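A minimal sketch of that exit-code handling in Python (the repository path and the notify_developers() email helper are placeholders):

import subprocess

def run(cmd):
    # subprocess.call returns the command's exit code
    return subprocess.call(cmd, cwd='/path/to/repo')  # placeholder path

if run(['hg', 'add']) != 0:
    notify_developers('hg add failed: not all files were added')  # hypothetical helper
elif run(['hg', 'commit', '-m', 'Auto-commit new assets']) == 0:
    # commit returned 0, so there is something to push
    if run(['hg', 'push']) != 0:
        # non-zero here covers both "nothing to push" (1) and real failures,
        # e.g. a pull being required first
        notify_developers('hg push failed: a pull may be required')  # hypothetical helper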
I am trying to create a simple python script that deploys my EAR file to the AdminServer of Weblogic. I have searched the internet and the documentation provided by Oracle, but I cannot find a way to determine if the application has been previously deployed. I would like my script to check if it has been, and if so, issue a redeploy command. If not, issue a deploy command.
I have tried to modify example scripts I've found, and although they worked, they do not behave as intended. One of the things I tried was to check (using the cd command) whether my EAR was in the deployments folder of WebLogic and, if it was, issue the redeploy; if not, it should throw an exception, upon which I would issue the deploy. However, an exception is thrown every time I issue the cd command in my script:
try:
    print 'Checking for the existence of the ' + applicationName + ' application.....'
    cd('C:\\Oracle\\Middleware\\user_projects\\domains\\base_domain\\config\\deployments\\MyTestEAR.ear\\')
    print 'Redeploying....'
    # Commands to redeploy....
except WLSTException:
    # Commands to deploy
    pass
I'm running this script on Windows, using the execfile("C:\MyTestDeployer.py") command after setting my environment variables with the WLST scripting tool. I've also tried to use a different path in my cd command, but to no avail. Any ideas?
It works for me:
print 'stopping and undeploying ...'
try:
    stopApplication('WebApplication')
    undeploy('WebApplication')
    print 'Redeploying...'
except Exception:
    print 'Deploy...'
deploy('WebApplication', '/home/saeed/project/test/WebApplication/dist/WebApplication.war')
startApplication('WebApplication2')
I've done something like that in the past, but with a different approach.
I used the weblogic.Deployer interface with the -listapps option to list the apps/libraries deployed to the domain, and then compared that list to the display-name element of the application.xml generated in the archive.
The problem I found with using plain file names, in my case, was that the archives came with the date on which they were generated, which led to an always-false comparison.
By using the display-name, I standardized the app name to be deployed, and later compared it to a new archive to be redeployed (see the sketch below).
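A rough illustration of that approach (a sketch; the EAR path, admin URL, and credentials are placeholders, and weblogic.Deployer must be on the Java classpath):

import subprocess
import xml.etree.ElementTree as ET
import zipfile

# Read the display-name element from the EAR's META-INF/application.xml
with zipfile.ZipFile(ear_path) as ear:  # ear_path is a placeholder
    root = ET.fromstring(ear.read('META-INF/application.xml'))
# application.xml may be namespaced, so match on the tag suffix
display_name = next(el.text for el in root.iter() if el.tag.endswith('display-name'))

# List everything deployed to the domain and look for the display-name
listing = subprocess.check_output([
    'java', 'weblogic.Deployer',
    '-adminurl', 't3://localhost:7001',
    '-username', 'weblogic', '-password', 'welcome1',
    '-listapps'])
already_deployed = display_name in listing.decode()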
Use the command listApplications() in Online mode to list all applications that are currently deployed in the WebLogic domain.
In the event of an error, the command returns a WLSTException.
Example:
wls:/mydomain/serverConfig> listApplications()
SamplesSearchWebApp
asyncServletEar
jspSimpleTagEar
ejb30
webservicesJwsSimpleEar
ejb20BeanMgedEar
xmlBeanEar
extServletAnnotationsEar
examplesWebApp
apache_xbean.jar
mainWebApp
jdbcRowSetsEar
Source: link
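Putting that together, a deploy-or-redeploy script could look roughly like this (a sketch only; the connection details, applicationName, and earPath are placeholders, and it assumes redeploy() raises a WLSTException when the application is not deployed):

connect('weblogic', 'welcome1', 't3://localhost:7001')  # placeholder credentials/URL
try:
    print 'Attempting to redeploy ' + applicationName + '...'
    redeploy(applicationName)
except WLSTException:
    print applicationName + ' is not deployed yet, deploying...'
    deploy(applicationName, earPath, targets='AdminServer')
disconnect()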