Image build failed, but not on forks

For this repository:

We get the following error for the image build:

Running with gitlab-runner 12.2.0 (a987417a)
  on renkulab-runner-1TBdisk S25j9XrP
Using Docker executor with image docker:stable ...
Pulling docker image docker:stable ...
Using docker image sha256:f038f0462ba57cd4635fffba0f75f3f4f7421775ce041956e3af0fee613b227d for docker:stable ...
Running on runner-S25j9XrP-project-2962-concurrent-0 via e78ef7c33048...
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/gitlab/gitanjali.thakur/fpar-data/.git/
Created fresh repository.
error: RPC failed; curl 52 Empty reply from server
fatal: the remote end hung up unexpectedly
ERROR: Job failed: exit code 1

But when we fork the repo, we do not have that same problem, for example here:

I understood from a related topic that it has to do with the large files; is there something we can do about this?

It is really weird, as I was able to build the environment on a fork. I also submitted an issue here: https://github.com/SwissDataScienceCenter/renku/issues/1112

Could you try adding

GIT_DEPTH: 1

under variables in your .gitlab-ci.yml file?
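
For reference, this is roughly what that could look like at the top of .gitlab-ci.yml (only the variables entry is the suggested change; the rest of the file stays as it is):

variables:
  GIT_DEPTH: 1   # shallow clone: fetch only the most recent commit instead of the 50 used in the failing job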

The ideal way of fixing the repo is to migrate all of the large git objects to LFS, but it will require a force-push to your project.
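
As a rough sketch of that migration (the *.nc pattern and master branch name are only examples, not necessarily what your project uses):

git lfs migrate import --everything --include="*.nc"   # rewrite history, moving matching files into LFS
git push --force origin master                          # the rewritten history has to be force-pushed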

I added that, but still get the same response:

Running with gitlab-runner 12.2.0 (a987417a)
  on renkulab-runner-1TBdisk S25j9XrP
Using Docker executor with image docker:stable ...
Pulling docker image docker:stable ...
Using docker image sha256:f038f0462ba57cd4635fffba0f75f3f4f7421775ce041956e3af0fee613b227d for docker:stable ...
Running on runner-S25j9XrP-project-2962-concurrent-0 via e78ef7c33048...
Fetching changes with git depth set to 1...
Initialized empty Git repository in /builds/gitlab/gitanjali.thakur/fpar-data/.git/
Created fresh repository.
error: RPC failed; HTTP 504 curl 22 The requested URL returned error: 504
fatal: the remote end hung up unexpectedly
ERROR: Job failed: exit code 1

We could roll back to the commit before the data was added and then add the data with LFS. Do you think that would help? Or will it still have problems with these big files?

Yes, that would most likely solve the problem! If you can do that, it would be the best approach. Just make sure to sync your forks with the parent repo afterwards.
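
A rough sketch of that roll-back-and-LFS approach, with a hypothetical commit placeholder and an assumed data/ path (adjust both to your project):

git reset --hard <commit-before-data>   # placeholder: the last commit before the large files were added
git lfs install
git lfs track "data/**"                 # assumed location of the data files; adjust the pattern
git add .gitattributes
# copy the data files back into the working tree (e.g. from a local backup), then:
git add data/
git commit -m "Add data via git LFS"
git push --force origin master          # the rolled-back history requires a force-push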

Thanks a lot for your quick help, @rrrrrok! But do you know why forking the repo removed the problem for me? I did not have the GIT_DEPTH: 1 setting in .gitlab-ci.yml. I am just wondering if it could also work the other way around: a repo that works for me but not for someone else who forks it.

I’m not sure - my best guess is that on a fork not all of the history gets recreated, but I’m also not sure why that would be the case. I’ll try to look into it.

We got into a similar situation again, but even worse: https://github.com/SwissDataScienceCenter/renku/issues/1218
Now I cannot fork the repo (it takes forever), and even if I create an environment based on a commit from two weeks ago, it shows an empty folder.

I’m trying to clone that project locally - it has lots of big objects checked into git. I’m not entirely sure how or why, but that’s what is causing it to misbehave. I would recommend using git lfs migrate to move the big git objects into git lfs and rewrite the history on this project (if it’s one you want to keep around). Let me know if you need some guidance on how to do this.
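
In case it helps, git lfs migrate can also report which file types account for the large objects before you decide what to move; a minimal sketch:

git lfs migrate info --everything   # show which extensions take up the most space in the repo's history

After that, the import and force-push steps sketched earlier in this thread apply, with the include pattern set to whatever this reports.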

Thanks for the suggestions, that totally worked!

Great to hear @rcnijzink! Did you manage to rewrite the history?

Yes, that worked. It needed a force push, but I guess that makes sense.
