I’m running docker-gitlab with a pretty standard setup (see below) behind an SSL-terminating nginx (on the docker host). Normally everything runs perfectly, and cloning over SSH works like a charm. But sometimes cloning over HTTPS fails with an error message:
~/Desktop → git clone https://example.com/foobar/foo.git
Cloning into 'foo'...
fatal: unable to access 'https://example.com/foobar/foo.git/': transfer closed with 2737 bytes remaining to read
I’ve looked into the logs. My SSL terminator (nginx) says upstream prematurely closed connection while reading upstream, so I guess it’s not at fault. Next I looked into the nginx log at /var/log/gitlab/nginx/gitlab_error.log, and found this:
2014/12/15 16:26:02 [alert] 282#0: *1246 readv() failed (13: Permission denied) while reading upstream
I can’t figure out the reason. Obviously the socket has the correct permissions, otherwise it wouldn’t work at all. I also found that improper permissions on the proxy_temp folder can cause this, but the permissions on /var/lib/nginx/proxy/ look fine.
I’d be very happy to help debug this further, but I don’t know where to start. Any ideas?
docker-gitlab variables are:
GITLAB_HOST: ...
GITLAB_PORT: 443
GITLAB_EMAIL: ...
GITLAB_HTTPS: true
GITLAB_HTTPS_HSTS: true
GITLAB_HTTPS_HSTS_MAXAGE: 2592000
JIRA_URL: ...
SMTP_USER: ...
SMTP_PASS: ...
lucas-clemente changed the title to "Error: «transfer closed with xxx bytes remaining to read» when cloning over HTTPS" (Dec 16, 2014)
@lucas-clemente this is some problem with the libssl package in Ubuntu. I had the same issue a while back and found some online resources to fix it, which, if I remember correctly, required compiling libssl from source.
I avoided compiling libssl from source for various reasons and just decided to wait it out, hoping the issue would automatically be resolved when ubuntu/debian updated the libssl package. So far that does not seem to have been the case.
Thanks for your reply! I couldn’t find anything on openssl / nginx and the quoted error message; do you maybe still remember where you read something about it?
Somehow I don’t understand how it can be a problem with libssl, though. As I wrote above, the TLS connection is terminated by the nginx running directly on the docker host, proxying to the gitlab docker container. However, the Permission denied error appears on the nginx proxy running inside the container. But then again, I might just be misinterpreting things.
Happy new year!
I will try to look it up and let you know.
On Friday 02 January 2015 07:49 PM, Lucas Clemente wrote:
Thanks for your reply! I couldn’t find anything on openssl / nginx and
the quoted error message, do you maybe still remember where you read
something about it? Somehow I don’t understand how it can be a problem with libssl though.
As I wrote above, the TLS connection is terminated from nginx running
/directly on the docker host/, proxying to the gitlab docker container.
However, the |Permission denied| error appears on the nginx proxy
running /inside the container/. But then again, I might just be
misinterpreting things.
Happy new year!
@lucas-clemente I was checking this issue today and found that the git process prematurely gives up on large repos.
# git clone https://git.example.com/opensource/openwrt.git
Cloning into 'openwrt'...
remote: Counting objects: 290509, done.
remote: Compressing objects: 100% (79939/79939), done.
error: RPC failed; result=56, HTTP code = 200
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
This happens on both http and https urls. Worth noting is that it works sometimes. Next I tried connecting directly to the container’s http port:
# git clone http://$(docker inspect --format {{.NetworkSettings.IPAddress}} gitlab)/opensource/openwrt.git -v
Cloning into 'openwrt'...
POST git-upload-pack (240 bytes)
remote: Counting objects: 290509, done.
remote: Compressing objects: 100% (79939/79939), done.
remote: Total 290509 (delta 198066), reused 289751 (delta 197308)
Receiving objects: 100% (290509/290509), 106.91 MiB | 20.02 MiB/s, done.
Resolving deltas: 100% (198066/198066), done.
Checking connectivity... done.
And I had no problems cloning using the git http clone url. Maybe you can give it a shot as well. If you see this behaviour too, then it appears some configuration is required at the reverse proxy.
I found the following reports of the same issue
- https://github.com/gitlabhq/gitlabhq/issues/6832
- https://gitlab.com/gitlab-org/gitlab-ce/issues/232
hi,
i have the same problem with big repositories. i increased the unicorn timeout as well as the workers, but now the unicorn workerkiller stops everything. now i patch
https://gitlab.com/gitlab-org/gitlab-ce/blob/master/config.ru
to increase the memory settings. it would be great for your dockerfile to have some settings for this too.
solved!
multiple problems, but
- first patched memory setting for the unicorn worker
- then used a modern git (ubuntu 14.04 did not work for me)
- and next: we had the docker image behind another loadbalancer (a farm of nginx’s). they do proxy-buffering with a max temp file size (http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_max_temp_file_size) of 1G, and this frontend LB cut off my clone. so i added
add_header X-Accel-Buffering no;
to the nginx inside your docker container, and now it works! i don’t know if it would be a good idea to put this setting inside your dockerfile, but an option would not be too bad
@ulrichSchreiner cool. will try out your changes and make the changes as required. thanks
hi,
one additional note: i did a git clone https://... from a WAN client to a loadbalancer with an nginx which then connected to your dockercontainer, which also contains an nginx. my repo was about ~1.2GB of data, and gitlab pushed out the bytes really fast to your nginx. and your nginx pushed out the bytes to our second nginx (our LB). but there we could not deliver the bytes as fast as we wanted (the client was connected via a slow line), and so this LB buffered, because the default is on (http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering). and the default for the temp-file size is 1GB. so this all led to my problem: after 1GB of data the stream ended and my client received an error.
but i think if a (slow) client connects directly to your nginx you would have the same problem, because you also use proxy_buffering and the default size for temp-file-size is also 1GB.
-> if you have a loadbalancer in front, it would be a good option to include the add_header X-Accel-Buffering no; (optional). but if you do not have a LB in front, you should have options to disable proxy-buffering and/or to set the proxy_max_temp_file_size.
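To make those two options concrete, here is a minimal sketch of the relevant server block, assuming the stock docker-gitlab nginx layout (the server name is a placeholder; the upstream socket path is the one quoted later in this thread):
upstream gitlab {
  server unix:/home/git/gitlab/tmp/sockets/gitlab.socket fail_timeout=0;
}
server {
  listen 80;
  server_name git.example.com;  # placeholder
  location / {
    proxy_pass http://gitlab;
    # tell a fronting nginx loadbalancer not to buffer this response
    add_header X-Accel-Buffering no;
    # and do not buffer locally either, so a slow client can never hit the
    # 1GB proxy_max_temp_file_size default mid-transfer
    proxy_buffering off;
  }
}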
hi all,
i experience this problem when using gitlab-ci, as it pulls the repo via https. can anyone please explain in more detail what i have to adjust so that it works?
maybe i should say that the repo concerned is not that big (~15 MB) and that it sometimes works (on the third or fourth retry)
@smoebody I was trying to reproduce this issue recently to test a probable fix; however, I was not able to reproduce it. Can you try a fix for me?
- Login to the running gitlab container
docker exec -it gitlab bash
- Edit
/etc/nginx/sites-enabled/gitlab
server block to replace
upstream gitlab {
  server unix:/home/git/gitlab/tmp/sockets/gitlab.socket fail_timeout=0;
}
with
upstream gitlab {
  server 127.0.0.1:8080 fail_timeout=0;
}
- Stop the nginx server
supervisorctl stop nginx
- Start the nginx server
supervisorctl start nginx
- Try cloning via http/https clone url
- Let me know if it fixes the issue.
- If you are able to clone, test many more times to be absolutely sure. Because when I was seeing this issue it was hit or miss. It would work sometimes and would fail other times.
If this does not resolve the issue, revert the above changes and try the changes from PR #269. You only need to make the changes to /etc/nginx/sites-enabled/gitlab, where you want to set the following options in the nginx configuration:
add_header X-Accel-Buffering no;
proxy_buffering off;
Refer to the PR for the appropriate place to insert these settings in the nginx configuration. Once you have these changes in place stop and start the nginx server the same way as described above, i.e. steps 3 and 4.
Let me know the results.
@smoebody fyi, supervisorctl status will let you know if all the daemons are running
@sameersbn, sorry for the delay.
i tried your first suggestion of changing from the unix socket to the tcp socket, and it seems that did the trick, although i would have preferred the second fix to solve the problem, as it is more plausible to me.
But for now i did 5 build retries without the error occurring again.
thanks again for your help!
@sameersbn no i did not. as i said, the first fix solved the problem, there was no need to try the second.
@smoebody ok.. can you revert the first fix and apply the second fix and let me know if it resolves the issue as well?
@sameersbn ok, with the issue i get the following error:
2015/04/08 16:07:43 [alert] 2773#0: *15 readv() failed (13: Permission denied) while reading upstream, client: 172.17.0.116, server: gitlab.xxx.xxx, request: "GET /xxx/xxx.git/info/refs?service=git-upload-pack HTTP/1.1", upstream: "http://unix:/home/git/gitlab/tmp/sockets/gitlab.socket:/xxx/xxxgit/info/refs?service=git-upload-pack", host: "gitlab.xxx.xxx"
as you can see, nginx sometimes prefixes the socket with http://… that’s when it’s not working.
both fixes mentioned above resolve the issue. for the second fix, inserting proxy_buffering off was enough.
When retrieving data from a URL using curl, I sometimes (in 80% of the cases) get:
error 18: transfer closed with outstanding read data remaining
Part of the returned data is then missing. The weird thing is that this never occurs when CURLOPT_RETURNTRANSFER is set to false, that is, when the curl_exec function doesn’t return the data but displays the content directly.
What could be the problem? Can I set some of the options to avoid such behaviour?
asked Nov 18, 2009 at 23:52
The error string is quite simply exactly what libcurl sees: since it is receiving a chunked encoding stream it knows when there is data left in a chunk to receive. When the connection is closed, libcurl knows that the last received chunk was incomplete. Then you get this error code.
There’s nothing you can do to avoid this error with the request unmodified, but you can try to work around it by issuing an HTTP 1.0 request instead (since chunked encoding won’t happen then). The fact is, though, that this is most likely a flaw in the server or in your network/setup somehow.
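For illustration, a minimal PHP sketch of that HTTP 1.0 workaround (the URL is a placeholder, not from the question):
$ch = curl_init('https://example.com/data'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// force HTTP/1.0 so the server cannot use chunked transfer encoding
curl_setopt($ch, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_0);
$data = curl_exec($ch);
if ($data === false) {
    echo 'cURL error ' . curl_errno($ch) . ': ' . curl_error($ch);
}
curl_close($ch);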
answered Dec 4, 2009 at 18:52 by Daniel Stenberg
I bet this is related to a wrong Content-Length header sent by the peer. My advice is to let curl set the length by itself.
answered Nov 19, 2009 at 8:17 by Christophe Eblé
Seeing this error during the use of Guzzle as well. The following header fixed it for me:
'headers' => [
'accept-encoding' => 'gzip, deflate',
],
I issued the request with Postman which gave me a complete response and no error.
Then I started adding the headers that Postman sends to the Guzzle request and this was the one that fixed it.
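For context, a sketch of the full Guzzle call, assuming guzzlehttp/guzzle is installed via Composer (the endpoint is a placeholder):
require 'vendor/autoload.php';

$client = new \GuzzleHttp\Client();
$response = $client->request('GET', 'https://example.com/api', [ // placeholder URL
    'headers' => [
        // advertising compression support made the server send a properly
        // delimited (gzip) body instead of one it truncated mid-stream
        'accept-encoding' => 'gzip, deflate',
    ],
]);
echo $response->getBody();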
answered Apr 30, 2019 at 10:16 by rambii
I had the same problem, but managed to fix it by suppressing the ‘Expect: 100-continue’ header that cURL usually sends (the following is PHP code, but should work similarly with other cURL APIs):
curl_setopt($curl, CURLOPT_HTTPHEADER, array('Expect:'));
By the way, I am sending calls to the HTTP server that is included in the JDK 6 REST stuff, which has all kinds of problems. In this case, it first sends a 100 response, and then with some requests doesn’t send the subsequent 200 response correctly.
answered Dec 4, 2009 at 15:14
I got this error when my server process got an exception midway during generating the response and simply closed the connection without saying goodbye. curl still expected data from the connection and complained (rightfully).
answered Jul 27, 2014 at 7:07 by koljaTM
Encountered a similar issue; my server is behind nginx. There was no error in the web server’s (Python Flask) log, but there were error messages in the nginx log:
[crit] 31054#31054: *269464 open() "/var/cache/nginx/proxy_temp/3/45/0000000453" failed (13: Permission denied) while reading upstream
I fixed this issue by correcting the permissions on the /var/cache/nginx directory.
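A one-line sketch of the fix described, assuming the nginx workers run as the nginx user and group (adjust to your setup):
chown -R nginx:nginx /var/cache/nginx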
answered May 18, 2020 at 10:33
I’ve worked around this error in the following way:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://www.someurl/');
curl_setopt($ch, CURLOPT_TIMEOUT, 30);
ob_start();                      // capture whatever curl manages to output
$response = curl_exec($ch);
$data = ob_get_clean();          // the (partial) response body ends up here
if (curl_getinfo($ch, CURLINFO_HTTP_CODE) == 200) {
    // success: process $data
}
The error still occurs, but I can handle the response data in the variable.
answered Oct 24, 2012 at 8:45
I had this problem working with pycurl and I solved it using
c.setopt(pycurl.HTTP_VERSION, pycurl.CURL_HTTP_VERSION_1_0)
like Eric Caron says.
answered Feb 5, 2015 at 8:51
I got this error when my server ran out of disk space and simply closed the connection midway through generating the response.
answered Feb 10, 2021 at 22:12
I got this error when i was accidentally downloading a file onto itself.
(I had created a symlink in an sshfs mount of the remote directory to make it available for download, forgot to switch the working directory, and used -OJ.)
I guess it won’t really “help” you when you read this, since it means your file got trashed.
answered Jan 26, 2019 at 8:54 by Darklighter
I had this same problem. I tried all of these solutions but none worked. In my case, the request was working fine in Postman, but when I made it with curl in PHP I got the error mentioned above.
What I did was check the PHP code generated by Postman and replicate the same thing.
First, the request is set to use HTTP version 1.1. And second, most important for me, was the encoding.
Here is the code that helped me
curl_setopt($ch, CURLOPT_ENCODING, '');
curl_setopt($ch, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_1);
If I remove the CURLOPT_ENCODING option, I get the error back.
answered Apr 5, 2021 at 10:01
I got this error when running through an nginx proxy while nginx was running under the user id daemon instead of the user id nginx.
This means some of nginx’s scratch directories weren’t accessible / writable.
Switching from user daemon; to user nginx; fixed it for me.
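A sketch of the change, assuming the stock config location:
# /etc/nginx/nginx.conf: run workers as nginx so its scratch directories stay writable
user nginx;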
answered Mar 25, 2021 at 16:43
It can be related to many issues. In my case, I was using curl to build an image (via the Docker API), and the build was stuck; that’s why I got this error.
When I fixed the build, the error disappeared.
answered Sep 8, 2021 at 8:24
We can fix this by suppressing the Expect: 100-continue header that cURL normally sends.
answered Aug 21, 2022 at 15:17
AshotAshot
334 bronze badges
1
Trying to download version 5:19.03.4~3-0~debian-stretch of docker-ce using a Dockerfile:
# apt-cache madison docker-ce
docker-ce | 5:19.03.4~3-0~debian-stretch | https://download.docker.com/linux/debian stretch/stable amd64 Packages
docker-ce | 5:19.03.3~3-0~debian-stretch | https://download.docker.com/linux/debian stretch/stable amd64 Packages
docker-ce | 5:19.03.2~3-0~debian-stretch | https://download.docker.com/linux/debian stretch/stable amd64 Packages
docker-ce | 5:19.03.1~3-0~debian-stretch | https://download.docker.com/linux/debian stretch/stable amd64 Packages
docker-ce | 5:19.03.0~3-0~debian-stretch | https://download.docker.com/linux/debian stretch/stable amd64 Packages
Below is the dockerfile:
FROM jenkins/jenkins:2.190.2
ENV DEBIAN_FRONTEND=noninteractive
# Official Jenkins image does not include sudo, change to root user
USER root
# Used to set the docker group ID
# Set to 497 by default, which is the groupID used by AWS Linux ECS instance
ARG DOCKER_GID=497
# Create Docker Group with GID
# Set default value of 497 if DOCKER_GID set to blank string by Docker compose
RUN groupadd -g ${DOCKER_GID:-497} docker
# Install base packages for docker, docker-compose & ansible
# apt-key adv --keyserver keyserver.ubuntu.com --recv-keys AA8E81B4331F7F50 &&
RUN apt-get update -y && \
    apt-get -y install bc \
    gawk \
    libffi-dev \
    musl-dev \
    apt-transport-https \
    curl \
    python3 \
    python3-dev \
    python3-setuptools \
    gcc \
    make \
    libssl-dev \
    python3-pip
# Used at build time but not runtime
ARG DOCKER_VERSION=5:19.03.4~3-0~debian-stretch
# Install the latest Docker CE binaries and add user `jenkins` to the docker group
RUN apt-get update && \
    apt-get -y install apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common && \
    curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
    add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
    $(lsb_release -cs) \
    stable" && \
    apt-get update && \
    apt-get -y install docker-ce=${DOCKER_VERSION:-5:19.03.4~3-0~debian-stretch} \
    docker-ce-cli=${DOCKER_VERSION:-5:19.03.4~3-0~debian-stretch} \
    containerd.io && \
    usermod -aG docker jenkins && \
    usermod -aG users jenkins
ARG DOCKER_COMPOSE=1.24.1
# Install docker compose
RUN curl -L "https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE:-1.24.1}/docker-compose-$(uname -s)-$(uname -m)" \
    -o /usr/local/bin/docker-compose && \
    chmod +x /usr/local/bin/docker-compose && \
    pip3 install ansible boto3
# Change to jenkins user
USER jenkins
# Add jenkins plugin
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/plugins.txt
This downloads and installs the 5:19.03.4~3-0~debian-stretch version of docker-ce.
Disk usage on the docker host:
$ sudo ls /var/lib/docker/165536.165536/
builder buildkit containers image network overlay2 plugins runtimes swarm tmp trust volumes
$ df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda7 67327600 16184040 47700464 26% /
The docker-compose file creates a volume (jenkins_home) on the docker host:
version: '2'
volumes:
  jenkins_home:
    external: true
services:
  jenkins:
    build:
      context: .
      args:
        DOCKER_GID: ${DOCKER_GID}
        DOCKER_VERSION: ${DOCKER_VERSION}
        DOCKER_COMPOSE: ${DOCKER_COMPOSE}
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8080:8080"
docker-compose up -d gives the error below:
Step 8/14 : ARG DOCKER_VERSION=5:19.03.4~3-0~debian-stretch
---> Running in 10f76e84e104
Removing intermediate container 10f76e84e104
---> 66b530666094
Step 9/14 : RUN apt-get update && apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common && curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable" && apt-get update && apt-get -y install docker-ce=${DOCKER_VERSION:-5:19.03.4~3-0~debian-stretch} docker-ce-cli=${DOCKER_VERSION:-5:19.03.4~3-0~debian-stretch} containerd.io && usermod -aG docker jenkins && usermod -aG users jenkins
:
:
:
Get:19 http://security.debian.org/debian-security stretch/updates/main amd64 sudo amd64 1.8.19p1-2.1+deb9u1 [1054 kB]
Get:20 https://download.docker.com/linux/debian stretch/stable amd64 docker-ce-cli amd64 5:19.03.4~3-0~debian-stretch [42.5 MB]
Err:20 https://download.docker.com/linux/debian stretch/stable amd64 docker-ce-cli amd64 5:19.03.4~3-0~debian-stretch
transfer closed with 30852358 bytes remaining to read
Get:21 https://download.docker.com/linux/debian stretch/stable amd64 docker-ce amd64 5:19.03.4~3-0~debian-stretch [22.8 MB]
Fetched 54.6 MB in 1min 8s (794 kB/s)
E: Failed to fetch https://download.docker.com/linux/debian/dists/stretch/pool/stable/amd64/docker-ce-cli_19.03.4~3-0~debian-stretch_amd64.deb transfer closed with 30852358 bytes remaining to read
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
ERROR: Service 'jenkins' failed to build: The command '/bin/sh -c apt-get update && apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common && curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable" && apt-get update && apt-get -y install docker-ce=${DOCKER_VERSION:-5:19.03.4~3-0~debian-stretch} docker-ce-cli=${DOCKER_VERSION:-5:19.03.4~3-0~debian-stretch} containerd.io && usermod -aG docker jenkins && usermod -aG users jenkins' returned a non-zero code: 100
Analysing disk usage in container 66b530666094:
$ docker run -it 66b530666094 bash
root@565b3a0ebfc2:/#
root@565b3a0ebfc2:/# df /
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 67327600 16335448 47549056 26% /
root@565b3a0ebfc2:/# pwd
/
root@565b3a0ebfc2:/# whoami
root
root@565b3a0ebfc2:/# exit
exit
$
This issue started occurring this evening; until then the Dockerfile was working fine. How do I resolve this download error? What does it signify?
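One workaround worth trying (not from the original post): configure apt to retry flaky downloads inside the Dockerfile, so a truncated transfer is re-fetched instead of failing the build. Acquire::Retries is a standard apt option; the rest mirrors the install step above.
RUN echo 'Acquire::Retries "5";' > /etc/apt/apt.conf.d/80-retries && \
    apt-get update && \
    apt-get -y install docker-ce=${DOCKER_VERSION:-5:19.03.4~3-0~debian-stretch} \
    docker-ce-cli=${DOCKER_VERSION:-5:19.03.4~3-0~debian-stretch} \
    containerd.io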
Error when executing a GET request with a resource
-
When I try to insert text from a resource into a GET request I get the following error; as soon as I remove the resource, everything runs fine. How do I fix this?
Thread #1: transfer closed with 173 bytes remaining to read
-
@Canine what if you insert a variable instead of the resource? Before the GET, read the resource into a variable.
I doubt this is related to what is inserted there at all, whether variable or resource.
-
@out I tried; same error if I put the resource into a variable. The resource contains a link https://goo.gl/XXX and some text in Russian.
-
@Canine most likely the text is too long. Check the site’s character limit in a browser.
-
@DrPrime the maximum is 2048 characters; I have 77 characters.
-
I just tried pasting the text in directly, and I still get the error.
-
As far as I can tell, the error appears when a space is added. I’m trying to add a caption to a photo via the VK API.
-
@Canine
As far as I can tell, the error appears when a space is added
Use encodeUriComponent
-
@support do I just wrap the variable in encodeUriComponent, or execute the code first and then insert the resulting variable?
-
@Canine I don’t know what your variable consists of. encodeUriComponent needs to be applied to every part of the query string:
site.com/action?q1=encodeUriComponent([[Q1]])&q2=encodeUriComponent([[Q2]])&q3=encodeUriComponent([[Q3]])
-
@support What is wrong with the caption element? It throws the same error (as a string; with an expression there’s an error in the code).
https://api.vk.com/method/photos.save?album_id=[[AID]]&group_id=[[GID]]&photos_list=[[PHOTOS_LIST]]&hash=[[HASH]]&server=[[SERVER]]&caption=encodeUriComponent([[OPISANIE]])&access_token=[[ACCESS_TOKEN]]
-
@Canine
encodeUriComponent([[OPISANIE]])
You can’t just insert JS into a string. You need to apply encodeUriComponent to the variable beforehand.
-
@support I executed the code first and everything worked, thank you!
I’m getting this error while extracting data from Instagram (basically, ~8000 images and comments were retrieved correctly, and then I suddenly get the following error):
cURL error 18: transfer closed with 3789 bytes remaining to read (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)
The only part of my code where I use curl is:
function url_exists($url) {
if (!$fp = curl_init($url)) return false;
return true;
}
and the URL is used here:
$feed_img_url = $feed[$idx]->getImageVersions2()->candidates[0]->getUrl()."\n";
if (url_exists($feed_img_url)==true) {
$img = "results/".$feed_id_str."/".$feed_id_str.".jpeg";
file_put_contents($img, file_get_contents($feed_img_url));
}
It doesn’t report which line causes the error, but I assume the exception comes from one of the lines above, since I haven’t used the URL anywhere else. The part $feed[$idx]->getImageVersions2()->candidates[0]->getUrl()."\n"; is from the Instagram PHP API, as in https://github.com/mgp25/Instagram-API
Please suggest fixes for this problem.
Additional information: this happens when fetching data from https://www.instagram.com/gegengrader/; although it doesn’t have many posts, the posts have many likes, and only 29 posts (images) were retrieved. Still, I’m not sure whether this is an API rate-limit issue or not. If it is, let me know how to fix it.
Solution
So I realized that when I browse this Instagram account manually, not everything loads anyway, and loading takes a long time. I used the following, and now I at least get 70 of the 130 items:
function url_exists($url) {
if (!$cURL = curl_init($url)) {
return false;
}
curl_setopt($cURL, CURLOPT_HTTPHEADER, array('Expect:'));
return true;
}
and also
catch (Exception $e) {
echo $e->getMessage();
if (strpos($e->getMessage(), 'cURL error 18: transfer closed') !== false) {
continue;
}
}
Perhaps not the best solution, but it serves my needs. Please feel free to add your own answers.
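One further sketch, not part of the original answer: url_exists() above never actually performs a request (curl_init() succeeds for any well-formed URL), so a HEAD request would make the check real:
function url_exists($url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_NOBODY, true);                  // HEAD request, no body
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_HTTPHEADER, array('Expect:'));  // keep the fix above
    curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    return $code >= 200 && $code < 400;
}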
I’m having problems sending XML packets via POST. If the packet length is <= 1024 everything is fine; if it is even one byte larger, I get error 18: transfer closed with outstanding read data remaining.
Here is the sending function:
function _xmlHttpsReq2($addr, $xml){
    echo "\n".$addr;
    echo "\n".$xml;
    echo "\n".strlen($xml);
    //if(strlen($xml)>1000)
    //    $xml = substr($xml,0,1025);
    //$header[] = "Host: mydomain.test";
    $header[] = "Content-type: text/xml; charset:utf-8";
    $header[] = "Content-length: ".strlen($xml);
    $ch = curl_init($addr);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_POST, 1);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $xml);
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
    curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
    curl_setopt($ch, CURLOPT_HTTPHEADER, $header);
    $result = curl_exec($ch);
    if (curl_errno($ch) != 0) {
        $info = curl_getinfo($ch);
        foreach ($info as $name => $value)
            echo "\n".$name.'='.$value;
        echo "\nres= ".$result;
        echo "\n".'CURL_error: ' . curl_errno($ch) . ', ' . curl_error($ch);
        return 'CURL_error: ' . curl_errno($ch) . ', ' . curl_error($ch);
    }
    $info = curl_getinfo($ch);
    foreach ($info as $name => $value)
        echo "\n".$name.'='.$value;
    echo "\nres= ".$result;
    curl_close($ch);
    return $result;
}
Here is the output of the function without the error (packet under 1024 bytes):
content_type=text/xml;charset=UTF-8
http_code=200
header_size=149
request_size=393
filetime=-1
ssl_verify_result=19
redirect_count=0
total_time=0.267345
namelookup_time=4.5E-5
connect_time=0.026682
pretransfer_time=0.155036
size_upload=258
size_download=17549
speed_download=65641
speed_upload=965
download_content_length=-1
upload_content_length=-1
starttransfer_time=0.267161
redirect_time=0
And here it is with the error (packet over 1024 bytes):
content_type=text/xml;charset=UTF-8
http_code=200
header_size=257
request_size=158
filetime=-1
ssl_verify_result=19
redirect_count=0
total_time=0.226429
namelookup_time=5.0E-5
connect_time=0.029792
pretransfer_time=0.158446
size_upload=1206
size_download=0
speed_download=0
speed_upload=5326
download_content_length=-1
upload_content_length=1206
starttransfer_time=0.184043
redirect_time=0
CURL_error: 18, transfer closed with outstanding read data remaining
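A plausible explanation, offered as an assumption rather than a confirmed diagnosis: libcurl adds an Expect: 100-continue header only to POST requests whose body exceeds roughly 1 KB, which matches the 1024-byte threshold here exactly. Suppressing that header in the function above is worth a try:
$header[] = "Content-type: text/xml; charset:utf-8";
$header[] = "Content-length: ".strlen($xml);
$header[] = "Expect:"; // an empty value stops curl from sending Expect: 100-continue
curl_setopt($ch, CURLOPT_HTTPHEADER, $header);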
#2
Feb 2, 2023
Yukes
Bump~
I too am getting this error and am finding no other posts on any forum describing this specific issue with the connection closing. This just started happening out of nowhere; I was playing fine yesterday.
#3
Feb 2, 2023
yaunzz
I’m having the same issue; no amount of uninstalling/reinstalling, factory resets, file management, etc. has worked. It was working fine for me just a few hours ago.
#4
Feb 2, 2023
Came here to say that I’m having the same issue. I’m unable to play any versions of Minecraft that I haven’t already played (i.e. previously downloaded), because when I press “Play” on the Minecraft launcher, it fails to download certain files (most notably .ogg files).
#6
Feb 2, 2023
Hello! Also having this issue; it is always .ogg files, which from what I’ve seen are just music files… so frustrating.
#7
Feb 2, 2023
I have the same problem and I don’t know how to solve it.
#8
Feb 2, 2023
Came back to say that it seems to be working today (for whatever reason) and I haven’t changed a single thing on my end.
#9
Feb 9, 2023
I’m having this issue. I’ve uninstalled, reinstalled. Nothing works.
While uploading photos to Yandex Disk, I periodically get the error curl 18: transfer closed with 11 bytes remaining to read. The error is not consistent: on one and the same iteration it may or may not appear.
Google pointed me to the idea that the problem is in the Content-Length header being passed, but I can’t figure out what could be wrong with it.
If anyone has run into a similar problem, or can suggest possible solutions, I would be very grateful.
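A minimal PHP sketch of the usual first check (URL and path are placeholders): let cURL derive the upload length from the file itself instead of hand-setting Content-Length, since a mismatch there produces exactly this class of error 18.
$path = '/path/to/photo.jpg';                            // placeholder
$fp = fopen($path, 'rb');
$ch = curl_init('https://example.com/upload/photo.jpg'); // placeholder URL
curl_setopt($ch, CURLOPT_UPLOAD, true);                  // PUT upload, body read from $fp
curl_setopt($ch, CURLOPT_INFILE, $fp);
curl_setopt($ch, CURLOPT_INFILESIZE, filesize($path));   // curl sets Content-Length itself
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = curl_exec($ch);
fclose($fp);
curl_close($ch);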
I’m not that familiar with cURL, so sending this request results in:
root@xyzxyz:~# curl --user 'username' --data-binary '{"jsonrpc":"1.0","id":"curltext","method":"helloWorld","params":[]}' -H 'content-type:text/plain;' http://192.168.56.1:8442
Enter host password for user 'username':
curl: (18) transfer closed with 349 bytes remaining to read
The password itself was entered correctly.
All the related material I found was not helpful, since this is for sure not a network problem (the server is running on my local machine).
asked Oct 1, 2018 at 12:53
PyBitmessage is a pure XML-RPC implementation, not “JSON-RPC” like bitcoind. So the correct curl syntax should be:
curl --user 'username' --data-binary '<methodCall><methodName>helloWorld</methodName><params><param><value><string>hello</string></value></param><param><value><string>World</string></value></param></params></methodCall>' -H 'content-type:text/plain;' http://192.168.56.1:8442
answered Jan 13, 2019 at 13:14